DonorsChoose

DonorsChoose.org receives hundreds of thousands of project proposals each year for classroom projects in need of funding. Right now, a large number of volunteers is needed to manually screen each submission before it's approved to be posted on the DonorsChoose.org website.

Next year, DonorsChoose.org expects to receive close to 500,000 project proposals. As a result, there are three main problems they need to solve:

  • How to scale current manual processes and resources to screen 500,000 projects so that they can be posted as quickly and as efficiently as possible
  • How to increase the consistency of project vetting across different volunteers to improve the experience for teachers
  • How to focus volunteer time on the applications that need the most assistance

The goal of the competition is to predict whether or not a DonorsChoose.org project proposal submitted by a teacher will be approved, using the text of project descriptions as well as additional metadata about the project, teacher, and school. DonorsChoose.org can then use this information to identify projects most likely to need further review before approval.

About the DonorsChoose Data Set

The train.csv data set provided by DonorsChoose contains the following features:

Feature Description
project_id A unique identifier for the proposed project. Example: p036502
project_title Title of the project. Examples:
  • Art Will Make You Happy!
  • First Grade Fun
project_grade_category Grade level of students for which the project is targeted. One of the following enumerated values:
  • Grades PreK-2
  • Grades 3-5
  • Grades 6-8
  • Grades 9-12
project_subject_categories One or more (comma-separated) subject categories for the project from the following enumerated list of values:
  • Applied Learning
  • Care & Hunger
  • Health & Sports
  • History & Civics
  • Literacy & Language
  • Math & Science
  • Music & The Arts
  • Special Needs
  • Warmth

Examples:
  • Music & The Arts
  • Literacy & Language, Math & Science
school_state State where school is located (Two-letter U.S. postal code). Example: WY
project_subject_subcategories One or more (comma-separated) subject subcategories for the project. Examples:
  • Literacy
  • Literature & Writing, Social Sciences
project_resource_summary An explanation of the resources needed for the project. Example:
  • My students need hands on literacy materials to manage sensory needs!
project_essay_1 First application essay*
project_essay_2 Second application essay*
project_essay_3 Third application essay*
project_essay_4 Fourth application essay*
project_submitted_datetime Datetime when project application was submitted. Example: 2016-04-28 12:43:56.245
teacher_id A unique identifier for the teacher of the proposed project. Example: bdf8baa8fedef6bfeec7ae4ff1c15c56
teacher_prefix Teacher's title. One of the following enumerated values:
  • nan
  • Dr.
  • Mr.
  • Mrs.
  • Ms.
  • Teacher.
teacher_number_of_previously_posted_projects Number of project applications previously submitted by the same teacher. Example: 2

* See the section Notes on the Essay Data for more details about these features.

Additionally, the resources.csv data set provides more data about the resources required for each project. Each line in this file represents a resource required by a project:

Feature Description
id A project_id value from the train.csv file. Example: p036502
description Desciption of the resource. Example: Tenor Saxophone Reeds, Box of 25
quantity Quantity of the resource required. Example: 3
price Price of the resource required. Example: 9.95

Note: Many projects require multiple resources. The id value corresponds to a project_id in train.csv, so you use it as a key to retrieve all resources needed for a project:

The data set contains the following label (the value you will attempt to predict):

Label Description
project_is_approved A binary flag indicating whether DonorsChoose approved the project. A value of 0 indicates the project was not approved, and a value of 1 indicates the project was approved.

Notes on the Essay Data

    Prior to May 17, 2016, the prompts for the essays were as follows:
  • __project_essay_1:__ "Introduce us to your classroom"
  • __project_essay_2:__ "Tell us more about your students"
  • __project_essay_3:__ "Describe how your students will use the materials you're requesting"
  • __project_essay_3:__ "Close by sharing why your project will make a difference"
    Starting on May 17, 2016, the number of essays was reduced from 4 to 2, and the prompts for the first 2 essays were changed to the following:
  • __project_essay_1:__ "Describe your students: What makes your students special? Specific details about their background, your neighborhood, and your school are all helpful."
  • __project_essay_2:__ "About your project: How will these materials make a difference in your students' learning and improve their school lives?"

  • For all projects with project_submitted_datetime of 2016-05-17 and later, the values of project_essay_3 and project_essay_4 will be NaN.
In [1]:
# Note - several code snippets have been used from the following link: https://colab.research.google.com/drive/1EkYHI-vGKnURqLL_u5LEf3yb0YJBVbZW
# This link was provided by the Appliedai team to answer a question about data leakage
In [2]:
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")

import sqlite3
import pandas as pd
import numpy as np
import nltk
import string
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import TfidfVectorizer

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import confusion_matrix
from sklearn import metrics
from sklearn.metrics import roc_curve, auc
from nltk.stem.porter import PorterStemmer

import re
# Tutorial about Python regular expressions: https://pymotw.com/2/re/
import string
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.stem.wordnet import WordNetLemmatizer

from gensim.models import Word2Vec
from gensim.models import KeyedVectors
import pickle

from tqdm import tqdm
import os

import plotly
import plotly.offline as offline
import plotly.graph_objs as go
offline.init_notebook_mode()
from collections import Counter

#save matplotlib default parameters.  When testing I found that the plot changed after running seaborn heatmap
import matplotlib as mpl
inline_rc = dict(mpl.rcParams)

1.1 Reading Data

In [3]:
#Use all records and test running time
project_data = pd.read_csv('train_data.csv')
resource_data = pd.read_csv('resources.csv')
In [4]:
print("Number of data points in train data", project_data.shape)
print('-'*50)
print("The attributes of data :", project_data.columns.values)
Number of data points in train data (109248, 17)
--------------------------------------------------
The attributes of data : ['Unnamed: 0' 'id' 'teacher_id' 'teacher_prefix' 'school_state'
 'project_submitted_datetime' 'project_grade_category'
 'project_subject_categories' 'project_subject_subcategories'
 'project_title' 'project_essay_1' 'project_essay_2' 'project_essay_3'
 'project_essay_4' 'project_resource_summary'
 'teacher_number_of_previously_posted_projects' 'project_is_approved']
In [5]:
# how to replace elements in list python: https://stackoverflow.com/a/2582163/4084039
cols = ['Date' if x=='project_submitted_datetime' else x for x in list(project_data.columns)]


#sort dataframe based on time pandas python: https://stackoverflow.com/a/49702492/4084039
project_data['Date'] = pd.to_datetime(project_data['project_submitted_datetime'])
project_data.drop('project_submitted_datetime', axis=1, inplace=True)
project_data.sort_values(by=['Date'], inplace=True)


# how to reorder columns pandas python: https://stackoverflow.com/a/13148611/4084039
project_data = project_data[cols]


project_data.head(2)
Out[5]:
Unnamed: 0 id teacher_id teacher_prefix school_state Date project_grade_category project_subject_categories project_subject_subcategories project_title project_essay_1 project_essay_2 project_essay_3 project_essay_4 project_resource_summary teacher_number_of_previously_posted_projects project_is_approved
55660 8393 p205479 2bf07ba08945e5d8b2a3f269b2b3cfe5 Mrs. CA 2016-04-27 00:27:36 Grades PreK-2 Math & Science Applied Sciences, Health & Life Science Engineering STEAM into the Primary Classroom I have been fortunate enough to use the Fairy ... My students come from a variety of backgrounds... Each month I try to do several science or STEM... It is challenging to develop high quality scie... My students need STEM kits to learn critical s... 53 1
76127 37728 p043609 3f60494c61921b3b43ab61bdde2904df Ms. UT 2016-04-27 00:31:25 Grades 3-5 Special Needs Special Needs Sensory Tools for Focus Imagine being 8-9 years old. You're in your th... Most of my students have autism, anxiety, anot... It is tough to do more than one thing at a tim... When my students are able to calm themselves d... My students need Boogie Boards for quiet senso... 4 1
In [6]:
print("Number of data points in train data", resource_data.shape)
print(resource_data.columns.values)
resource_data.head(2)
Number of data points in train data (1541272, 4)
['id' 'description' 'quantity' 'price']
Out[6]:
id description quantity price
0 p233245 LC652 - Lakeshore Double-Space Mobile Drying Rack 1 149.00
1 p069063 Bouncy Bands for Desks (Blue support pipes) 3 14.95

1.2 preprocessing of project_subject_categories

In [7]:
catogories = list(project_data['project_subject_categories'].values)
# remove special characters from list of strings python: https://stackoverflow.com/a/47301924/4084039

# https://www.geeksforgeeks.org/removing-stop-words-nltk-python/
# https://stackoverflow.com/questions/23669024/how-to-strip-a-specific-word-from-a-string
# https://stackoverflow.com/questions/8270092/remove-all-whitespace-in-a-string-in-python
cat_list = []
for i in catogories:
    temp = ""
    # consider we have text like this "Math & Science, Warmth, Care & Hunger"
    for j in i.split(','): # it will split it in three parts ["Math & Science", "Warmth", "Care & Hunger"]
        if 'The' in j.split(): # this will split each of the catogory based on space "Math & Science"=> "Math","&", "Science"
            j=j.replace('The','') # if we have the words "The" we are going to replace it with ''(i.e removing 'The')
        j = j.replace(' ','') # we are placeing all the ' '(space) with ''(empty) ex:"Math & Science"=>"Math&Science"
        temp+=j.strip()+" " #" abc ".strip() will return "abc", remove the trailing spaces
        temp = temp.replace('&','_') # we are replacing the & value into 
    cat_list.append(temp.strip())
    
project_data['clean_categories'] = cat_list
project_data.drop(['project_subject_categories'], axis=1, inplace=True)

1.3 preprocessing of project_subject_subcategories

In [8]:
sub_catogories = list(project_data['project_subject_subcategories'].values)
# remove special characters from list of strings python: https://stackoverflow.com/a/47301924/4084039

# https://www.geeksforgeeks.org/removing-stop-words-nltk-python/
# https://stackoverflow.com/questions/23669024/how-to-strip-a-specific-word-from-a-string
# https://stackoverflow.com/questions/8270092/remove-all-whitespace-in-a-string-in-python

sub_cat_list = []
for i in sub_catogories:
    temp = ""
    # consider we have text like this "Math & Science, Warmth, Care & Hunger"
    for j in i.split(','): # it will split it in three parts ["Math & Science", "Warmth", "Care & Hunger"]
        if 'The' in j.split(): # this will split each of the catogory based on space "Math & Science"=> "Math","&", "Science"
            j=j.replace('The','') # if we have the words "The" we are going to replace it with ''(i.e removing 'The')
        j = j.replace(' ','') # we are placeing all the ' '(space) with ''(empty) ex:"Math & Science"=>"Math&Science"
        temp +=j.strip()+" "#" abc ".strip() will return "abc", remove the trailing spaces
        temp = temp.replace('&','_')
    sub_cat_list.append(temp.strip())

project_data['clean_subcategories'] = sub_cat_list
project_data.drop(['project_subject_subcategories'], axis=1, inplace=True)

1.3 Text preprocessing

In [9]:
# merge two column text dataframe: 
project_data["essay"] = project_data["project_essay_1"].map(str) +\
                        project_data["project_essay_2"].map(str) + \
                        project_data["project_essay_3"].map(str) + \
                        project_data["project_essay_4"].map(str)
In [10]:
project_data.head(2)
Out[10]:
Unnamed: 0 id teacher_id teacher_prefix school_state Date project_grade_category project_title project_essay_1 project_essay_2 project_essay_3 project_essay_4 project_resource_summary teacher_number_of_previously_posted_projects project_is_approved clean_categories clean_subcategories essay
55660 8393 p205479 2bf07ba08945e5d8b2a3f269b2b3cfe5 Mrs. CA 2016-04-27 00:27:36 Grades PreK-2 Engineering STEAM into the Primary Classroom I have been fortunate enough to use the Fairy ... My students come from a variety of backgrounds... Each month I try to do several science or STEM... It is challenging to develop high quality scie... My students need STEM kits to learn critical s... 53 1 Math_Science AppliedSciences Health_LifeScience I have been fortunate enough to use the Fairy ...
76127 37728 p043609 3f60494c61921b3b43ab61bdde2904df Ms. UT 2016-04-27 00:31:25 Grades 3-5 Sensory Tools for Focus Imagine being 8-9 years old. You're in your th... Most of my students have autism, anxiety, anot... It is tough to do more than one thing at a tim... When my students are able to calm themselves d... My students need Boogie Boards for quiet senso... 4 1 SpecialNeeds SpecialNeeds Imagine being 8-9 years old. You're in your th...

1.4.2.3 Using Pretrained Models: TFIDF weighted W2V

In [11]:
# printing some random reviews
print(project_data['essay'].values[0])
print("="*50)
print(project_data['essay'].values[150])
print("="*50)
print(project_data['essay'].values[1000])
print("="*50)
print(project_data['essay'].values[20000])
print("="*50)
print(project_data['essay'].values[99999])
print("="*50)
I have been fortunate enough to use the Fairy Tale STEM kits in my classroom as well as the STEM journals, which my students really enjoyed.  I would love to implement more of the Lakeshore STEM kits in my classroom for the next school year as they provide excellent and engaging STEM lessons.My students come from a variety of backgrounds, including language and socioeconomic status.  Many of them don't have a lot of experience in science and engineering and these kits give me the materials to provide these exciting opportunities for my students.Each month I try to do several science or STEM/STEAM projects.  I would use the kits and robot to help guide my science instruction in engaging and meaningful ways.  I can adapt the kits to my current language arts pacing guide where we already teach some of the material in the kits like tall tales (Paul Bunyan) or Johnny Appleseed.  The following units will be taught in the next school year where I will implement these kits: magnets, motion, sink vs. float, robots.  I often get to these units and don't know If I am teaching the right way or using the right materials.    The kits will give me additional ideas, strategies, and lessons to prepare my students in science.It is challenging to develop high quality science activities.  These kits give me the materials I need to provide my students with science activities that will go along with the curriculum in my classroom.  Although I have some things (like magnets) in my classroom, I don't know how to use them effectively.  The kits will provide me with the right amount of materials and show me how to use them in an appropriate way.
==================================================
I teach high school English to students with learning and behavioral disabilities. My students all vary in their ability level. However, the ultimate goal is to increase all students literacy levels. This includes their reading, writing, and communication levels.I teach a really dynamic group of students. However, my students face a lot of challenges. My students all live in poverty and in a dangerous neighborhood. Despite these challenges, I have students who have the the desire to defeat these challenges. My students all have learning disabilities and currently all are performing below grade level. My students are visual learners and will benefit from a classroom that fulfills their preferred learning style.The materials I am requesting will allow my students to be prepared for the classroom with the necessary supplies.  Too often I am challenged with students who come to school unprepared for class due to economic challenges.  I want my students to be able to focus on learning and not how they will be able to get school supplies.  The supplies will last all year.  Students will be able to complete written assignments and maintain a classroom journal.  The chart paper will be used to make learning more visual in class and to create posters to aid students in their learning.  The students have access to a classroom printer.  The toner will be used to print student work that is completed on the classroom Chromebooks.I want to try and remove all barriers for the students learning and create opportunities for learning. One of the biggest barriers is the students not having the resources to get pens, paper, and folders. My students will be able to increase their literacy skills because of this project.
==================================================
\"Life moves pretty fast. If you don't stop and look around once in awhile, you could miss it.\"  from the movie, Ferris Bueller's Day Off.  Think back...what do you remember about your grandparents?  How amazing would it be to be able to flip through a book to see a day in their lives?My second graders are voracious readers! They love to read both fiction and nonfiction books.  Their favorite characters include Pete the Cat, Fly Guy, Piggie and Elephant, and Mercy Watson. They also love to read about insects, space and plants. My students are hungry bookworms! My students are eager to learn and read about the world around them. My kids love to be at school and are like little sponges absorbing everything around them. Their parents work long hours and usually do not see their children. My students are usually cared for by their grandparents or a family friend. Most of my students do not have someone who speaks English at home. Thus it is difficult for my students to acquire language.Now think forward... wouldn't it mean a lot to your kids, nieces or nephews or grandchildren, to be able to see a day in your life today 30 years from now? Memories are so precious to us and being able to share these memories with future generations will be a rewarding experience.  As part of our social studies curriculum, students will be learning about changes over time.  Students will be studying photos to learn about how their community has changed over time.  In particular, we will look at photos to study how the land, buildings, clothing, and schools have changed over time.  As a culminating activity, my students will capture a slice of their history and preserve it through scrap booking. Key important events in their young lives will be documented with the date, location, and names.   Students will be using photos from home and from school to create their second grade memories.   Their scrap books will preserve their unique stories for future generations to enjoy.Your donation to this project will provide my second graders with an opportunity to learn about social studies in a fun and creative manner.  Through their scrapbooks, children will share their story with others and have a historical document for the rest of their lives.
==================================================
\"A person's a person, no matter how small.\" (Dr.Seuss) I teach the smallest students with the biggest enthusiasm for learning. My students learn in many different ways using all of our senses and multiple intelligences. I use a wide range of techniques to help all my students succeed. \r\nStudents in my class come from a variety of different backgrounds which makes for wonderful sharing of experiences and cultures, including Native Americans.\r\nOur school is a caring community of successful learners which can be seen through collaborative student project based learning in and out of the classroom. Kindergarteners in my class love to work with hands-on materials and have many different opportunities to practice a skill before it is mastered. Having the social skills to work cooperatively with friends is a crucial aspect of the kindergarten curriculum.Montana is the perfect place to learn about agriculture and nutrition. My students love to role play in our pretend kitchen in the early childhood classroom. I have had several kids ask me, \"Can we try cooking with REAL food?\" I will take their idea and create \"Common Core Cooking Lessons\" where we learn important math and writing concepts while cooking delicious healthy food for snack time. My students will have a grounded appreciation for the work that went into making the food and knowledge of where the ingredients came from as well as how it's healthy for their bodies. This project would expand our learning of nutrition and agricultural cooking recipes by having us peel our own apples to make homemade applesauce, make our own bread, and mix up healthy plants from our classroom garden in the spring. We will also create our own cookbooks to be printed and shared with families. \r\nStudents will gain math and literature skills as well as a life long enjoyment for healthy cooking.nannan
==================================================
My classroom consists of twenty-two amazing sixth graders from different cultures and backgrounds. They are a social bunch who enjoy working in partners and working with groups. They are hard-working and eager to head to middle school next year. My job is to get them ready to make this transition and make it as smooth as possible. In order to do this, my students need to come to school every day and feel safe and ready to learn. Because they are getting ready to head to middle school, I give them lots of choice- choice on where to sit and work, the order to complete assignments, choice of projects, etc. Part of the students feeling safe is the ability for them to come into a welcoming, encouraging environment. My room is colorful and the atmosphere is casual. I want them to take ownership of the classroom because we ALL share it together. Because my time with them is limited, I want to ensure they get the most of this time and enjoy it to the best of their abilities.Currently, we have twenty-two desks of differing sizes, yet the desks are similar to the ones the students will use in middle school. We also have a kidney table with crates for seating. I allow my students to choose their own spots while they are working independently or in groups. More often than not, most of them move out of their desks and onto the crates. Believe it or not, this has proven to be more successful than making them stay at their desks! It is because of this that I am looking toward the “Flexible Seating” option for my classroom.\r\n The students look forward to their work time so they can move around the room. I would like to get rid of the constricting desks and move toward more “fun” seating options. I am requesting various seating so my students have more options to sit. Currently, I have a stool and a papasan chair I inherited from the previous sixth-grade teacher as well as five milk crate seats I made, but I would like to give them more options and reduce the competition for the “good seats”. I am also requesting two rugs as not only more seating options but to make the classroom more welcoming and appealing. In order for my students to be able to write and complete work without desks, I am requesting a class set of clipboards. Finally, due to curriculum that requires groups to work together, I am requesting tables that we can fold up when we are not using them to leave more room for our flexible seating options.\r\nI know that with more seating options, they will be that much more excited about coming to school! Thank you for your support in making my classroom one students will remember forever!nannan
==================================================
In [12]:
# https://stackoverflow.com/a/47091490/4084039
import re

def decontracted(phrase):
    # specific
    phrase = re.sub(r"won't", "will not", phrase)
    phrase = re.sub(r"can\'t", "can not", phrase)

    # general
    phrase = re.sub(r"n\'t", " not", phrase)
    phrase = re.sub(r"\'re", " are", phrase)
    phrase = re.sub(r"\'s", " is", phrase)
    phrase = re.sub(r"\'d", " would", phrase)
    phrase = re.sub(r"\'ll", " will", phrase)
    phrase = re.sub(r"\'t", " not", phrase)
    phrase = re.sub(r"\'ve", " have", phrase)
    phrase = re.sub(r"\'m", " am", phrase)
    return phrase
In [13]:
sent = decontracted(project_data['essay'].values[20000])
print(sent)
print("="*50)
\"A person is a person, no matter how small.\" (Dr.Seuss) I teach the smallest students with the biggest enthusiasm for learning. My students learn in many different ways using all of our senses and multiple intelligences. I use a wide range of techniques to help all my students succeed. \r\nStudents in my class come from a variety of different backgrounds which makes for wonderful sharing of experiences and cultures, including Native Americans.\r\nOur school is a caring community of successful learners which can be seen through collaborative student project based learning in and out of the classroom. Kindergarteners in my class love to work with hands-on materials and have many different opportunities to practice a skill before it is mastered. Having the social skills to work cooperatively with friends is a crucial aspect of the kindergarten curriculum.Montana is the perfect place to learn about agriculture and nutrition. My students love to role play in our pretend kitchen in the early childhood classroom. I have had several kids ask me, \"Can we try cooking with REAL food?\" I will take their idea and create \"Common Core Cooking Lessons\" where we learn important math and writing concepts while cooking delicious healthy food for snack time. My students will have a grounded appreciation for the work that went into making the food and knowledge of where the ingredients came from as well as how it is healthy for their bodies. This project would expand our learning of nutrition and agricultural cooking recipes by having us peel our own apples to make homemade applesauce, make our own bread, and mix up healthy plants from our classroom garden in the spring. We will also create our own cookbooks to be printed and shared with families. \r\nStudents will gain math and literature skills as well as a life long enjoyment for healthy cooking.nannan
==================================================
In [14]:
# \r \n \t remove from string python: http://texthandler.com/info/remove-line-breaks-python/
sent = sent.replace('\\r', ' ')
sent = sent.replace('\\"', ' ')
sent = sent.replace('\\n', ' ')
print(sent)
 A person is a person, no matter how small.  (Dr.Seuss) I teach the smallest students with the biggest enthusiasm for learning. My students learn in many different ways using all of our senses and multiple intelligences. I use a wide range of techniques to help all my students succeed.   Students in my class come from a variety of different backgrounds which makes for wonderful sharing of experiences and cultures, including Native Americans.  Our school is a caring community of successful learners which can be seen through collaborative student project based learning in and out of the classroom. Kindergarteners in my class love to work with hands-on materials and have many different opportunities to practice a skill before it is mastered. Having the social skills to work cooperatively with friends is a crucial aspect of the kindergarten curriculum.Montana is the perfect place to learn about agriculture and nutrition. My students love to role play in our pretend kitchen in the early childhood classroom. I have had several kids ask me,  Can we try cooking with REAL food?  I will take their idea and create  Common Core Cooking Lessons  where we learn important math and writing concepts while cooking delicious healthy food for snack time. My students will have a grounded appreciation for the work that went into making the food and knowledge of where the ingredients came from as well as how it is healthy for their bodies. This project would expand our learning of nutrition and agricultural cooking recipes by having us peel our own apples to make homemade applesauce, make our own bread, and mix up healthy plants from our classroom garden in the spring. We will also create our own cookbooks to be printed and shared with families.   Students will gain math and literature skills as well as a life long enjoyment for healthy cooking.nannan
In [15]:
#remove spacial character: https://stackoverflow.com/a/5843547/4084039
sent = re.sub('[^A-Za-z0-9]+', ' ', sent)
print(sent)
 A person is a person no matter how small Dr Seuss I teach the smallest students with the biggest enthusiasm for learning My students learn in many different ways using all of our senses and multiple intelligences I use a wide range of techniques to help all my students succeed Students in my class come from a variety of different backgrounds which makes for wonderful sharing of experiences and cultures including Native Americans Our school is a caring community of successful learners which can be seen through collaborative student project based learning in and out of the classroom Kindergarteners in my class love to work with hands on materials and have many different opportunities to practice a skill before it is mastered Having the social skills to work cooperatively with friends is a crucial aspect of the kindergarten curriculum Montana is the perfect place to learn about agriculture and nutrition My students love to role play in our pretend kitchen in the early childhood classroom I have had several kids ask me Can we try cooking with REAL food I will take their idea and create Common Core Cooking Lessons where we learn important math and writing concepts while cooking delicious healthy food for snack time My students will have a grounded appreciation for the work that went into making the food and knowledge of where the ingredients came from as well as how it is healthy for their bodies This project would expand our learning of nutrition and agricultural cooking recipes by having us peel our own apples to make homemade applesauce make our own bread and mix up healthy plants from our classroom garden in the spring We will also create our own cookbooks to be printed and shared with families Students will gain math and literature skills as well as a life long enjoyment for healthy cooking nannan
In [16]:
# https://gist.github.com/sebleier/554280
# we are removing the words from the stop words list: 'no', 'nor', 'not'
stopwords= {'i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', "you're", "you've",\
            "you'll", "you'd", 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', \
            'she', "she's", 'her', 'hers', 'herself', 'it', "it's", 'its', 'itself', 'they', 'them', 'their',\
            'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', "that'll", 'these', 'those', \
            'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', \
            'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', \
            'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after',\
            'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further',\
            'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more',\
            'most', 'other', 'some', 'such', 'only', 'own', 'same', 'so', 'than', 'too', 'very', \
            's', 't', 'can', 'will', 'just', 'don', "don't", 'should', "should've", 'now', 'd', 'll', 'm', 'o', 're', \
            've', 'y', 'ain', 'aren', "aren't", 'couldn', "couldn't", 'didn', "didn't", 'doesn', "doesn't", 'hadn',\
            "hadn't", 'hasn', "hasn't", 'haven', "haven't", 'isn', "isn't", 'ma', 'mightn', "mightn't", 'mustn',\
            "mustn't", 'needn', "needn't", 'shan', "shan't", 'shouldn', "shouldn't", 'wasn', "wasn't", 'weren', "weren't", \
            'won', "won't", 'wouldn', "wouldn't"}
In [17]:
# Combining all the above stundents 
from tqdm import tqdm
preprocessed_essays = []
# tqdm is for printing the status bar
for sentance in tqdm(project_data['essay'].values):
    sent = decontracted(sentance)
    sent = sent.replace('\\r', ' ')
    sent = sent.replace('\\"', ' ')
    sent = sent.replace('\\n', ' ')
    sent = re.sub('[^A-Za-z0-9]+', ' ', sent)
    # https://gist.github.com/sebleier/554280
    sent = ' '.join(e for e in sent.split() if e.lower() not in stopwords)
    preprocessed_essays.append(sent.lower().strip())
100%|████████████████████████████████████████████████████████████████████████| 109248/109248 [00:25<00:00, 4224.08it/s]
In [18]:
# after preprocesing
preprocessed_essays[20000]
Out[18]:
'person person no matter small dr seuss teach smallest students biggest enthusiasm learning students learn many different ways using senses multiple intelligences use wide range techniques help students succeed students class come variety different backgrounds makes wonderful sharing experiences cultures including native americans school caring community successful learners seen collaborative student project based learning classroom kindergarteners class love work hands materials many different opportunities practice skill mastered social skills work cooperatively friends crucial aspect kindergarten curriculum montana perfect place learn agriculture nutrition students love role play pretend kitchen early childhood classroom several kids ask try cooking real food take idea create common core cooking lessons learn important math writing concepts cooking delicious healthy food snack time students grounded appreciation work went making food knowledge ingredients came well healthy bodies project would expand learning nutrition agricultural cooking recipes us peel apples make homemade applesauce make bread mix healthy plants classroom garden spring also create cookbooks printed shared families students gain math literature skills well life long enjoyment healthy cooking nannan'
In [19]:
project_data['essay'] = preprocessed_essays
In [20]:
#words in essays
project_data['words_in_essay'] = project_data['essay'].str.split().apply(len).value_counts()
project_data['words_in_essay'].fillna(0, inplace=True)

1.4 Preprocessing of `project_title`

In [21]:
# similarly you can preprocess the titles also
preprocessed_titles = []
for sentance in tqdm(project_data['project_title'].values):
    sent = decontracted(sentance)
    sent = sent.replace('\\r', ' ')
    sent = sent.replace('\\"', ' ')
    sent = sent.replace('\\n', ' ')
    sent = re.sub('[^A-Za-z0-9]+', ' ', sent)
    # https://gist.github.com/sebleier/554280
    sent = ' '.join(e for e in sent.split() if e not in stopwords)
    preprocessed_titles.append(sent.lower().strip())
100%|███████████████████████████████████████████████████████████████████████| 109248/109248 [00:02<00:00, 51743.84it/s]
In [22]:
# after preprocesing
preprocessed_titles[1000]
Out[22]:
'empowering students through art learning about then now'
In [23]:
project_data['project_title'] = preprocessed_titles
In [24]:
#words in title
project_data['words_in_title'] = project_data['project_title'].str.split().apply(len).value_counts()
project_data['words_in_title'].fillna(0, inplace=True)

1.5 Preprocessing of `project_grade_category`

In [25]:
#uunique values:
#array(['Grades PreK-2', 'Grades 9-12', 'Grades 6-8', 'Grades 3-5'],
#      dtype=object)

#preprocess project_grade_category for CountVectorizer
project_data['project_grade_category'] = project_data['project_grade_category'].str.replace(' ', '_')
project_data['project_grade_category'] = project_data['project_grade_category'].str.replace('-', '_')

1.6 Preprocessing of `Computing Sentiment Scores`

In [26]:
#https://stackoverflow.com/questions/13842088/set-value-for-particular-cell-in-pandas-dataframe-using-index
# In [18]: %timeit df.set_value('C', 'x', 10)
# 100000 loops, best of 3: 2.9 µs per loop

# In [20]: %timeit df['x']['C'] = 10
# 100000 loops, best of 3: 6.31 µs per loop

# In [81]: %timeit df.at['C', 'x'] = 10
# 100000 loops, best of 3: 9.2 µs per loop




import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

# import nltk
# nltk.download('vader_lexicon')

sid = SentimentIntensityAnalyzer()

project_data['neg'] = 0.0
project_data['neu'] = 0.0
project_data['pos'] = 0.0
project_data['compound'] = 0.0
for index, row in project_data.iterrows():
    ss = sid.polarity_scores(row['essay'])
    project_data.set_value(index, 'neg', ss['neg'])
    project_data.set_value(index, 'neu', ss['neu'])
    project_data.set_value(index, 'pos', ss['pos'])
    project_data.set_value(index, 'compound', ss['compound'])
    

# we can use these 4 things as features/attributes (neg, neu, pos, compound)
# neg: 0.0, neu: 0.753, pos: 0.247, compound: 0.93
C:\Users\francisco.porrata\AppData\Local\Continuum\anaconda3\lib\site-packages\nltk\twitter\__init__.py:20: UserWarning:

The twython library has not been installed. Some functionality from the twitter package will not be available.

In [27]:
project_data[['neg','neu','pos','compound']].head()
Out[27]:
neg neu pos compound
55660 0.013 0.773 0.214 0.9867
76127 0.078 0.650 0.272 0.9899
51140 0.016 0.706 0.278 0.9864
473 0.031 0.775 0.194 0.9524
41558 0.031 0.653 0.315 0.9873

Assignment 5: Logistic Regression

  1. [Task-1] Logistic Regression(either SGDClassifier with log loss, or LogisticRegression) on these feature sets
    • Set 1: categorical, numerical features + project_title(BOW) + preprocessed_eassay (`BOW with bi-grams` with `min_df=10` and `max_features=5000`)
    • Set 2: categorical, numerical features + project_title(TFIDF)+ preprocessed_eassay (`TFIDF with bi-grams` with `min_df=10` and `max_features=5000`)
    • Set 3: categorical, numerical features + project_title(AVG W2V)+ preprocessed_eassay (AVG W2V)
    • Set 4: categorical, numerical features + project_title(TFIDF W2V)+ preprocessed_essay (TFIDF W2V)

  2. Hyper paramter tuning (find best hyper parameters corresponding the algorithm that you choose)
    • Find the best hyper parameter which will give the maximum AUC value
    • Find the best hyper paramter using k-fold cross validation or simple cross validation data
    • Use gridsearch cv or randomsearch cv or you can also write your own for loops to do this task of hyperparameter tuning

  3. Representation of results
    • You need to plot the performance of model both on train data and cross validation data for each hyper parameter, like shown in the figure.
    • Once after you found the best hyper parameter, you need to train your model with it, and find the AUC on test data and plot the ROC curve on both train and test.
    • Along with plotting ROC curve, you need to print the confusion matrix with predicted and original labels of test data points. Please visualize your confusion matrices using seaborn heatmaps.

  4. [Task-2] Apply Logistic Regression on the below feature set Set 5 by finding the best hyper parameter as suggested in step 2 and step 3.
  5. Consider these set of features Set 5 :
    • school_state : categorical data
    • clean_categories : categorical data
    • clean_subcategories : categorical data
    • project_grade_category :categorical data
    • teacher_prefix : categorical data
    • quantity : numerical data
    • teacher_number_of_previously_posted_projects : numerical data
    • price : numerical data
    • sentiment score's of each of the essay : numerical data
    • number of words in the title : numerical data
    • number of words in the combine essays : numerical data
    And apply the Logistic regression on these features by finding the best hyper paramter as suggested in step 2 and step 3

  6. Conclusion

Note: Data Leakage

  1. There will be an issue of data-leakage if you vectorize the entire data and then split it into train/cv/test.
  2. To avoid the issue of data-leakag, make sure to split your data first and then vectorize it.
  3. While vectorizing your data, apply the method fit_transform() on you train data, and apply the method transform() on cv/test data.
  4. For more details please go through this link.

2. Logistic Regression

2.1 Splitting data into Train and cross validation(or test): Stratified Sampling

In [28]:
# please write all the code with proper documentation, and proper titles for each subsection
# go through documentations and blogs before you start coding
# first figure out what to do, and then think about how to do.
# reading and understanding error messages will be very much helpfull in debugging your code
# when you plot any graph make sure you use 
    # a. Title, that describes your plot, this will be very helpful to the reader
    # b. Legends if needed
    # c. X-axis label
    # d. Y-axis label

    
    
from sklearn.model_selection import train_test_split

X = project_data.drop(['project_is_approved'], axis=1)
y = project_data['project_is_approved'].values

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, stratify=y, random_state = 123)
X_train, X_cv, y_train, y_cv = train_test_split(X_train, y_train, test_size=0.33, stratify=y_train, random_state = 123)
    
    

2.2 Make Data Model Ready: encoding numerical, categorical features

2.2.1 Vectorizing Numerical features

In [29]:
price_data = resource_data.groupby('id').agg({'price':'sum', 'quantity':'sum'}).reset_index()
X_train = pd.merge(X_train, price_data, on='id', how='left')
X_cv = pd.merge(X_cv, price_data, on='id', how='left')
X_test = pd.merge(X_test, price_data, on='id', how='left')
In [30]:
from sklearn.preprocessing import Normalizer
normalizer = Normalizer()
# normalizer.fit(X_train['price'].values)
# this will rise an error Expected 2D array, got 1D array instead: 
# array=[105.22 215.96  96.01 ... 368.98  80.53 709.67].
# Reshape your data either using 
# array.reshape(-1, 1) if your data has a single feature 
# array.reshape(1, -1)  if it contains a single sample.
normalizer.fit(X_train['price'].values.reshape(1, -1))

X_train_price_norm = normalizer.transform(X_train['price'].values.reshape(1, -1))
X_cv_price_norm = normalizer.transform(X_cv['price'].values.reshape(1, -1))
X_test_price_norm = normalizer.transform(X_test['price'].values.reshape(1, -1))


X_train_price_norm = X_train_price_norm.reshape(-1,1)
X_cv_price_norm = X_cv_price_norm.reshape(-1,1)
X_test_price_norm = X_test_price_norm.reshape(-1,1)




print("After vectorizations")
print(X_train_price_norm.shape, y_train.shape)
print(X_cv_price_norm.shape, y_cv.shape)
print(X_test_price_norm.shape, y_test.shape)
print("="*100)
After vectorizations
(49041, 1) (49041,)
(24155, 1) (24155,)
(36052, 1) (36052,)
====================================================================================================
In [31]:
normalizer = Normalizer()
# normalizer.fit(X_train['price'].values)
# this will rise an error Expected 2D array, got 1D array instead: 
# array=[105.22 215.96  96.01 ... 368.98  80.53 709.67].
# Reshape your data either using 
# array.reshape(-1, 1) if your data has a single feature 
# array.reshape(1, -1)  if it contains a single sample.
normalizer.fit(X_train['quantity'].values.reshape(1, -1))

X_train_quantity_norm = normalizer.transform(X_train['quantity'].values.reshape(1, -1))
X_cv_quantity_norm = normalizer.transform(X_cv['quantity'].values.reshape(1, -1))
X_test_quantity_norm = normalizer.transform(X_test['quantity'].values.reshape(1, -1))


X_train_quantity_norm = X_train_quantity_norm.reshape(-1,1)
X_cv_quantity_norm = X_cv_quantity_norm.reshape(-1,1)
X_test_quantity_norm = X_test_quantity_norm.reshape(-1,1)




print("After vectorizations")
print(X_train_quantity_norm.shape, y_train.shape)
print(X_cv_quantity_norm.shape, y_cv.shape)
print(X_test_quantity_norm.shape, y_test.shape)
print("="*100)
After vectorizations
(49041, 1) (49041,)
(24155, 1) (24155,)
(36052, 1) (36052,)
====================================================================================================
In [32]:
#X_train_price_standardized
#X_test_price_standardized
In [33]:
normalizer = Normalizer()
# normalizer.fit(X_train['price'].values)
# this will rise an error Expected 2D array, got 1D array instead: 
# array=[105.22 215.96  96.01 ... 368.98  80.53 709.67].
# Reshape your data either using 
# array.reshape(-1, 1) if your data has a single feature 
# array.reshape(1, -1)  if it contains a single sample.
normalizer.fit(X_train['teacher_number_of_previously_posted_projects'].values.reshape(1, -1))

X_train_previously_posted_projects_norm = normalizer.transform(X_train['teacher_number_of_previously_posted_projects'].values.reshape(1, -1))
X_cv_previously_posted_projects_norm = normalizer.transform(X_cv['teacher_number_of_previously_posted_projects'].values.reshape(1, -1))
X_test_previously_posted_projects_norm = normalizer.transform(X_test['teacher_number_of_previously_posted_projects'].values.reshape(1, -1))


X_train_previously_posted_projects_norm = X_train_previously_posted_projects_norm.reshape(-1,1)
X_cv_previously_posted_projects_norm =X_cv_previously_posted_projects_norm.reshape(-1,1)
X_test_previously_posted_projects_norm = X_test_previously_posted_projects_norm.reshape(-1,1)                                                               
  



print("After vectorizations")
print(X_train_previously_posted_projects_norm.shape, y_train.shape)
print(X_cv_previously_posted_projects_norm.shape, y_cv.shape)
print(X_test_previously_posted_projects_norm.shape, y_test.shape)
print("="*100)
After vectorizations
(49041, 1) (49041,)
(24155, 1) (24155,)
(36052, 1) (36052,)
====================================================================================================
In [34]:
normalizer = Normalizer()
# normalizer.fit(X_train['price'].values)
# this will rise an error Expected 2D array, got 1D array instead: 
# array=[105.22 215.96  96.01 ... 368.98  80.53 709.67].
# Reshape your data either using 
# array.reshape(-1, 1) if your data has a single feature 
# array.reshape(1, -1)  if it contains a single sample.
normalizer.fit(X_train['neu'].values.reshape(1, -1))

X_train_neu_norm = normalizer.transform(X_train['neu'].values.reshape(1, -1))
X_cv_neu_norm = normalizer.transform(X_cv['neu'].values.reshape(1, -1))
X_test_neu_norm = normalizer.transform(X_test['neu'].values.reshape(1, -1))


X_train_neu_norm = X_train_neu_norm.reshape(-1,1)
X_cv_neu_norm =X_cv_neu_norm.reshape(-1,1)
X_test_neu_norm = X_test_neu_norm.reshape(-1,1)                                                               
  



print("After vectorizations")
print(X_train_neu_norm.shape, y_train.shape)
print(X_cv_neu_norm.shape, y_cv.shape)
print(X_test_neu_norm.shape, y_test.shape)
print("="*100)
After vectorizations
(49041, 1) (49041,)
(24155, 1) (24155,)
(36052, 1) (36052,)
====================================================================================================
In [35]:
normalizer = Normalizer()
# normalizer.fit(X_train['price'].values)
# this will rise an error Expected 2D array, got 1D array instead: 
# array=[105.22 215.96  96.01 ... 368.98  80.53 709.67].
# Reshape your data either using 
# array.reshape(-1, 1) if your data has a single feature 
# array.reshape(1, -1)  if it contains a single sample.
normalizer.fit(X_train['neg'].values.reshape(1, -1))

X_train_neg_norm = normalizer.transform(X_train['neg'].values.reshape(1, -1))
X_cv_neg_norm = normalizer.transform(X_cv['neg'].values.reshape(1, -1))
X_test_neg_norm = normalizer.transform(X_test['neg'].values.reshape(1, -1))


X_train_neg_norm = X_train_neg_norm.reshape(-1,1)
X_cv_neg_norm =X_cv_neg_norm.reshape(-1,1)
X_test_neg_norm = X_test_neg_norm.reshape(-1,1)                                                               
  



print("After vectorizations")
print(X_train_neg_norm.shape, y_train.shape)
print(X_cv_neg_norm.shape, y_cv.shape)
print(X_test_neg_norm.shape, y_test.shape)
print("="*100)
After vectorizations
(49041, 1) (49041,)
(24155, 1) (24155,)
(36052, 1) (36052,)
====================================================================================================
In [36]:
normalizer = Normalizer()
# normalizer.fit(X_train['price'].values)
# this will rise an error Expected 2D array, got 1D array instead: 
# array=[105.22 215.96  96.01 ... 368.98  80.53 709.67].
# Reshape your data either using 
# array.reshape(-1, 1) if your data has a single feature 
# array.reshape(1, -1)  if it contains a single sample.
normalizer.fit(X_train['pos'].values.reshape(1, -1))

X_train_pos_norm = normalizer.transform(X_train['pos'].values.reshape(1, -1))
X_cv_pos_norm = normalizer.transform(X_cv['pos'].values.reshape(1, -1))
X_test_pos_norm = normalizer.transform(X_test['pos'].values.reshape(1, -1))


X_train_pos_norm = X_train_pos_norm.reshape(-1,1)
X_cv_pos_norm =X_cv_pos_norm.reshape(-1,1)
X_test_pos_norm = X_test_pos_norm.reshape(-1,1)                                                               
  



print("After vectorizations")
print(X_train_pos_norm.shape, y_train.shape)
print(X_cv_pos_norm.shape, y_cv.shape)
print(X_test_pos_norm.shape, y_test.shape)
print("="*100)
After vectorizations
(49041, 1) (49041,)
(24155, 1) (24155,)
(36052, 1) (36052,)
====================================================================================================
In [37]:
normalizer = Normalizer()
# normalizer.fit(X_train['price'].values)
# this will rise an error Expected 2D array, got 1D array instead: 
# array=[105.22 215.96  96.01 ... 368.98  80.53 709.67].
# Reshape your data either using 
# array.reshape(-1, 1) if your data has a single feature 
# array.reshape(1, -1)  if it contains a single sample.
normalizer.fit(X_train['compound'].values.reshape(1, -1))

X_train_compound_norm = normalizer.transform(X_train['compound'].values.reshape(1, -1))
X_cv_compound_norm = normalizer.transform(X_cv['compound'].values.reshape(1, -1))
X_test_compound_norm = normalizer.transform(X_test['compound'].values.reshape(1, -1))


X_train_compound_norm = X_train_compound_norm.reshape(-1,1)
X_cv_compound_norm =X_cv_compound_norm.reshape(-1,1)
X_test_compound_norm = X_test_compound_norm.reshape(-1,1)                                                               
  



print("After vectorizations")
print(X_train_compound_norm.shape, y_train.shape)
print(X_cv_compound_norm.shape, y_cv.shape)
print(X_test_compound_norm.shape, y_test.shape)
print("="*100)
After vectorizations
(49041, 1) (49041,)
(24155, 1) (24155,)
(36052, 1) (36052,)
====================================================================================================
In [38]:
normalizer = Normalizer()
# normalizer.fit(X_train['price'].values)
# this will rise an error Expected 2D array, got 1D array instead: 
# array=[105.22 215.96  96.01 ... 368.98  80.53 709.67].
# Reshape your data either using 
# array.reshape(-1, 1) if your data has a single feature 
# array.reshape(1, -1)  if it contains a single sample.
normalizer.fit(X_train['words_in_essay'].values.reshape(1, -1))

X_train_words_in_essay_norm = normalizer.transform(X_train['words_in_essay'].values.reshape(1, -1))
X_cv_words_in_essay_norm = normalizer.transform(X_cv['words_in_essay'].values.reshape(1, -1))
X_test_words_in_essay_norm = normalizer.transform(X_test['words_in_essay'].values.reshape(1, -1))


X_train_words_in_essay_norm = X_train_words_in_essay_norm.reshape(-1,1)
X_cv_words_in_essay_norm = X_cv_words_in_essay_norm.reshape(-1,1)
X_test_words_in_essay_norm = X_test_words_in_essay_norm.reshape(-1,1)




print("After vectorizations")
print(X_train_words_in_essay_norm.shape, y_train.shape)
print(X_cv_words_in_essay_norm.shape, y_cv.shape)
print(X_test_words_in_essay_norm.shape, y_test.shape)
print("="*100)
After vectorizations
(49041, 1) (49041,)
(24155, 1) (24155,)
(36052, 1) (36052,)
====================================================================================================
In [39]:
X_train['words_in_essay'].isnull().values.any()
Out[39]:
False
In [40]:
normalizer = Normalizer()
# normalizer.fit(X_train['price'].values)
# this will rise an error Expected 2D array, got 1D array instead: 
# array=[105.22 215.96  96.01 ... 368.98  80.53 709.67].
# Reshape your data either using 
# array.reshape(-1, 1) if your data has a single feature 
# array.reshape(1, -1)  if it contains a single sample.
normalizer.fit(X_train['words_in_title'].values.reshape(1, -1))

X_train_words_in_title_norm = normalizer.transform(X_train['words_in_title'].values.reshape(1, -1))
X_cv_words_in_title_norm = normalizer.transform(X_cv['words_in_title'].values.reshape(1, -1))
X_test_words_in_title_norm = normalizer.transform(X_test['words_in_title'].values.reshape(1, -1))


X_train_words_in_title_norm = X_train_words_in_title_norm.reshape(-1,1)
X_cv_words_in_title_norm = X_cv_words_in_title_norm.reshape(-1,1)
X_test_words_in_title_norm = X_test_words_in_title_norm.reshape(-1,1)




print("After vectorizations")
print(X_train_words_in_title_norm.shape, y_train.shape)
print(X_cv_words_in_title_norm.shape, y_cv.shape)
print(X_test_words_in_title_norm.shape, y_test.shape)
print("="*100)
After vectorizations
(49041, 1) (49041,)
(24155, 1) (24155,)
(36052, 1) (36052,)
====================================================================================================

2.2.2 Vectorizing Categorical data

In [41]:
from collections import Counter
vectorizer = CountVectorizer()
vectorizer.fit(X_train['clean_categories'].values) # fit has to happen only on train data

# we use the fitted CountVectorizer to convert the text to vector
X_train_clean_cat_ohe = vectorizer.transform(X_train['clean_categories'].values)
X_cv_clean_cat_ohe = vectorizer.transform(X_cv['clean_categories'].values)
X_test_clean_cat_ohe = vectorizer.transform(X_test['clean_categories'].values)

print("After vectorizations")
print(X_train_clean_cat_ohe.shape, y_train.shape)
print(X_cv_clean_cat_ohe.shape, y_cv.shape)
print(X_test_clean_cat_ohe.shape, y_test.shape)
print(vectorizer.get_feature_names())
print("="*100)
After vectorizations
(49041, 9) (49041,)
(24155, 9) (24155,)
(36052, 9) (36052,)
['appliedlearning', 'care_hunger', 'health_sports', 'history_civics', 'literacy_language', 'math_science', 'music_arts', 'specialneeds', 'warmth']
====================================================================================================
In [42]:
vectorizer = CountVectorizer()
vectorizer.fit(X_train['clean_subcategories'].values) # fit has to happen only on train data

# we use the fitted CountVectorizer to convert the text to vector
X_train_clean_sub_ohe = vectorizer.transform(X_train['clean_subcategories'].values)
X_cv_clean_sub_ohe = vectorizer.transform(X_cv['clean_subcategories'].values)
X_test_clean_sub_ohe = vectorizer.transform(X_test['clean_subcategories'].values)

print("After vectorizations")
print(X_train_clean_sub_ohe.shape, y_train.shape)
print(X_cv_clean_sub_ohe.shape, y_cv.shape)
print(X_test_clean_sub_ohe.shape, y_test.shape)
print(vectorizer.get_feature_names())
print("="*100)
After vectorizations
(49041, 30) (49041,)
(24155, 30) (24155,)
(36052, 30) (36052,)
['appliedsciences', 'care_hunger', 'charactereducation', 'civics_government', 'college_careerprep', 'communityservice', 'earlydevelopment', 'economics', 'environmentalscience', 'esl', 'extracurricular', 'financialliteracy', 'foreignlanguages', 'gym_fitness', 'health_lifescience', 'health_wellness', 'history_geography', 'literacy', 'literature_writing', 'mathematics', 'music', 'nutritioneducation', 'other', 'parentinvolvement', 'performingarts', 'socialsciences', 'specialneeds', 'teamsports', 'visualarts', 'warmth']
====================================================================================================
In [43]:
# you can do the similar thing with state, teacher_prefix and project_grade_category also
In [44]:
vectorizer = CountVectorizer()
vectorizer.fit(X_train['school_state'].values) # fit has to happen only on train data

# we use the fitted CountVectorizer to convert the text to vector
X_train_state_ohe = vectorizer.transform(X_train['school_state'].values)
X_cv_state_ohe = vectorizer.transform(X_cv['school_state'].values)
X_test_state_ohe = vectorizer.transform(X_test['school_state'].values)

print("After vectorizations")
print(X_train_state_ohe.shape, y_train.shape)
print(X_cv_state_ohe.shape, y_cv.shape)
print(X_test_state_ohe.shape, y_test.shape)
print(vectorizer.get_feature_names())
print("="*100)
After vectorizations
(49041, 51) (49041,)
(24155, 51) (24155,)
(36052, 51) (36052,)
['ak', 'al', 'ar', 'az', 'ca', 'co', 'ct', 'dc', 'de', 'fl', 'ga', 'hi', 'ia', 'id', 'il', 'in', 'ks', 'ky', 'la', 'ma', 'md', 'me', 'mi', 'mn', 'mo', 'ms', 'mt', 'nc', 'nd', 'ne', 'nh', 'nj', 'nm', 'nv', 'ny', 'oh', 'ok', 'or', 'pa', 'ri', 'sc', 'sd', 'tn', 'tx', 'ut', 'va', 'vt', 'wa', 'wi', 'wv', 'wy']
====================================================================================================
In [45]:
vectorizer = CountVectorizer()
vectorizer.fit(X_train['teacher_prefix'].fillna(' ').values) # fit has to happen only on train data

# we use the fitted CountVectorizer to convert the text to vector
X_train_teacher_ohe = vectorizer.transform(X_train['teacher_prefix'].fillna(' ').values)
X_cv_teacher_ohe = vectorizer.transform(X_cv['teacher_prefix'].fillna(' ').values)
X_test_teacher_ohe = vectorizer.transform(X_test['teacher_prefix'].fillna(' ').values)

print("After vectorizations")
print(X_train_teacher_ohe.shape, y_train.shape)
print(X_cv_teacher_ohe.shape, y_cv.shape)
print(X_test_teacher_ohe.shape, y_test.shape)
print(vectorizer.get_feature_names())
#print("="*100)
After vectorizations
(49041, 5) (49041,)
(24155, 5) (24155,)
(36052, 5) (36052,)
['dr', 'mr', 'mrs', 'ms', 'teacher']
In [46]:
vectorizer = CountVectorizer()
vectorizer.fit(X_train['project_grade_category'].values) # fit has to happen only on train data

# we use the fitted CountVectorizer to convert the text to vector
X_train_grade_ohe = vectorizer.transform(X_train['project_grade_category'].values)
X_cv_grade_ohe = vectorizer.transform(X_cv['project_grade_category'].values)
X_test_grade_ohe = vectorizer.transform(X_test['project_grade_category'].values)

print("After vectorizations")
print(X_train_grade_ohe.shape, y_train.shape)
print(X_cv_grade_ohe.shape, y_cv.shape)
print(X_test_grade_ohe.shape, y_test.shape)
print(vectorizer.get_feature_names())
print("="*100)
After vectorizations
(49041, 4) (49041,)
(24155, 4) (24155,)
(36052, 4) (36052,)
['grades_3_5', 'grades_6_8', 'grades_9_12', 'grades_prek_2']
====================================================================================================

2.3 Make Data Model Ready: encoding essay, and project_title

2.3.1 Bag of words

In [47]:
vectorizer = CountVectorizer(min_df=10,ngram_range=(1,2), max_features=5000)
vectorizer.fit(X_train['essay'].values) # fit has to happen only on train data

# we use the fitted CountVectorizer to convert the text to vector
X_train_essay_bow = vectorizer.transform(X_train['essay'].values)
X_cv_essay_bow = vectorizer.transform(X_cv['essay'].values)
X_test_essay_bow = vectorizer.transform(X_test['essay'].values)

print("After vectorizations")
print(X_train_essay_bow.shape, y_train.shape)
print(X_cv_essay_bow.shape, y_cv.shape)
print(X_test_essay_bow.shape, y_test.shape)
print(vectorizer.get_feature_names())
print("="*100)
After vectorizations
(49041, 5000) (49041,)
(24155, 5000) (24155,)
(36052, 5000) (36052,)
['000', '10', '100', '100 free', '100 percent', '100 students', '11', '12', ...]
====================================================================================================
In [48]:
vectorizer = CountVectorizer()
vectorizer.fit(X_train['project_title'].values) # fit has to happen only on train data

# we use the fitted CountVectorizer to convert the text to vector
X_train_title_bow = vectorizer.transform(X_train['project_title'].values)
X_cv_title_bow = vectorizer.transform(X_cv['project_title'].values)
X_test_title_bow = vectorizer.transform(X_test['project_title'].values)

print("After vectorizations")
print(X_train_title_bow.shape, y_train.shape)
print(X_cv_title_bow.shape, y_cv.shape)
print(X_test_title_bow.shape, y_test.shape)
print(vectorizer.get_feature_names())
print("="*100)
After vectorizations
(49041, 11647) (49041,)
(24155, 11647) (24155,)
(36052, 11647) (36052,)
['000', '03', '04', '05', '06', '09', '0n', '0s', '10', '100', ...]
====================================================================================================

2.3.2 TFIDF vectorizer

In [49]:
from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer(min_df=10,ngram_range=(1,2), max_features=5000)
vectorizer.fit(X_train['essay'].values) # fit has to happen only on train data

# we use the fitted CountVectorizer to convert the text to vector
X_train_essay_Tfidf = vectorizer.transform(X_train['essay'].values)
X_cv_essay_Tfidf = vectorizer.transform(X_cv['essay'].values)
X_test_essay_Tfidf = vectorizer.transform(X_test['essay'].values)

print("After vectorizations")
print(X_train_essay_Tfidf.shape, y_train.shape)
print(X_cv_essay_Tfidf.shape, y_cv.shape)
print(X_test_essay_Tfidf.shape, y_test.shape)
print(vectorizer.get_feature_names())
print("="*100)
After vectorizations
(49041, 5000) (49041,)
(24155, 5000) (24155,)
(36052, 5000) (36052,)
['000', '10', '100', '100 free', '100 percent', '100 students', '11', ...]
====================================================================================================
In [50]:
# Similarly you can vectorize for title also
vectorizer = TfidfVectorizer()
vectorizer.fit(X_train['project_title'].values) # fit has to happen only on train data

# we use the fitted CountVectorizer to convert the text to vector
X_train_title_Tfidf = vectorizer.transform(X_train['project_title'].values)
X_cv_title_Tfidf = vectorizer.transform(X_cv['project_title'].values)
X_test_title_Tfidf = vectorizer.transform(X_test['project_title'].values)

print("After vectorizations")
print(X_train_title_Tfidf.shape, y_train.shape)
print(X_cv_title_Tfidf.shape, y_cv.shape)
print(X_test_title_Tfidf.shape, y_test.shape)
print(vectorizer.get_feature_names())
print("="*100)
After vectorizations
(49041, 11647) (49041,)
(24155, 11647) (24155,)
(36052, 11647) (36052,)
['000', '03', '04', '05', '06', '09', '0n', '0s', ...]
====================================================================================================

2.3.3 Using Pretrained Models: Avg W2V

In [51]:
'''
# Reading glove vectors in python: https://stackoverflow.com/a/38230349/4084039
def loadGloveModel(gloveFile):
    print ("Loading Glove Model")
    f = open(gloveFile,'r', encoding="utf8")
    model = {}
    for line in tqdm(f):
        splitLine = line.split()
        word = splitLine[0]
        embedding = np.array([float(val) for val in splitLine[1:]])
        model[word] = embedding
    print ("Done.",len(model)," words loaded!")
    return model
model = loadGloveModel('glove.42B.300d.txt')

# ============================
Output:
    
Loading Glove Model
1917495it [06:32, 4879.69it/s]
Done. 1917495  words loaded!

# ============================

words = []
for i in preproced_texts:
    words.extend(i.split(' '))

for i in preproced_titles:
    words.extend(i.split(' '))
print("all the words in the coupus", len(words))
words = set(words)
print("the unique words in the coupus", len(words))

inter_words = set(model.keys()).intersection(words)
print("The number of words that are present in both glove vectors and our coupus", \
      len(inter_words),"(",np.round(len(inter_words)/len(words)*100,3),"%)")

words_courpus = {}
words_glove = set(model.keys())
for i in words:
    if i in words_glove:
        words_courpus[i] = model[i]
print("word 2 vec length", len(words_courpus))


# stronging variables into pickle files python: http://www.jessicayung.com/how-to-use-pickle-to-save-and-load-variables-in-python/

import pickle
with open('glove_vectors', 'wb') as f:
    pickle.dump(words_courpus, f)


'''
Out[51]:
'\n# Reading glove vectors in python: https://stackoverflow.com/a/38230349/4084039\ndef loadGloveModel(gloveFile):\n    print ("Loading Glove Model")\n    f = open(gloveFile,\'r\', encoding="utf8")\n    model = {}\n    for line in tqdm(f):\n        splitLine = line.split()\n        word = splitLine[0]\n        embedding = np.array([float(val) for val in splitLine[1:]])\n        model[word] = embedding\n    print ("Done.",len(model)," words loaded!")\n    return model\nmodel = loadGloveModel(\'glove.42B.300d.txt\')\n\n# ============================\nOutput:\n    \nLoading Glove Model\n1917495it [06:32, 4879.69it/s]\nDone. 1917495  words loaded!\n\n# ============================\n\nwords = []\nfor i in preproced_texts:\n    words.extend(i.split(\' \'))\n\nfor i in preproced_titles:\n    words.extend(i.split(\' \'))\nprint("all the words in the coupus", len(words))\nwords = set(words)\nprint("the unique words in the coupus", len(words))\n\ninter_words = set(model.keys()).intersection(words)\nprint("The number of words that are present in both glove vectors and our coupus",       len(inter_words),"(",np.round(len(inter_words)/len(words)*100,3),"%)")\n\nwords_courpus = {}\nwords_glove = set(model.keys())\nfor i in words:\n    if i in words_glove:\n        words_courpus[i] = model[i]\nprint("word 2 vec length", len(words_courpus))\n\n\n# stronging variables into pickle files python: http://www.jessicayung.com/how-to-use-pickle-to-save-and-load-variables-in-python/\n\nimport pickle\nwith open(\'glove_vectors\', \'wb\') as f:\n    pickle.dump(words_courpus, f)\n\n\n'
In [52]:
# stronging variables into pickle files python: http://www.jessicayung.com/how-to-use-pickle-to-save-and-load-variables-in-python/
# make sure you have the glove_vectors file
with open('glove_vectors', 'rb') as f:
    model = pickle.load(f)
    glove_words =  set(model.keys())
    
    
    
In [53]:
# average Word2Vec
# compute average word2vec for each review.
avg_w2v_vectors_train = []; # the avg-w2v for each sentence/review is stored in this list
for sentence in tqdm(X_train['essay'].values): # for each review/sentence
    vector = np.zeros(300) # as word vectors are of zero length
    cnt_words =0; # num of words with a valid vector in the sentence/review
    for word in sentence.split(): # for each word in a review/sentence
        if word in glove_words:
            vector += model[word]
            cnt_words += 1
    if cnt_words != 0:
        vector /= cnt_words
    avg_w2v_vectors_train.append(vector)

print(len(avg_w2v_vectors_train))
print(len(avg_w2v_vectors_train[0]))
print(avg_w2v_vectors_train[0])
100%|██████████████████████████████████████████████████████████████████████████| 49041/49041 [00:11<00:00, 4322.04it/s]
49041
300
[ 5.22458925e-02  1.24916530e-01  2.61037008e-02 -1.19240581e-01
  1.49760000e-02 -2.04756105e-02 -3.10873015e+00 -1.19798165e-02
 -9.66706767e-05 -1.67096594e-02 -5.60448917e-02 -7.62943970e-03
  1.20891095e-01 -7.51985662e-02 -3.09186571e-02 -4.42754547e-02
  5.58611955e-02 -1.06810050e-01  1.16667614e-01 -4.20450383e-02
  4.13216301e-02 -7.35384662e-03 -1.24566414e-02 -3.82023910e-03
 -5.34680211e-02 -1.06418276e-01  1.10645170e-01 -6.61514459e-02
 -1.19406901e-01 -5.53242932e-02 -2.25244240e-01 -1.69916107e-01
  3.30882158e-02  1.63592047e-01 -6.39267293e-03  5.25323105e-02
 -7.47499549e-02  3.23069586e-02 -1.18154038e-02 -1.69920970e-02
 -5.35056361e-02  1.60172041e-01 -7.69415774e-02 -1.25657383e-01
  2.24512274e-02 -8.81778429e-02  5.07888000e-02 -6.98293955e-03
  1.12213353e-02 -1.21494461e-01 -1.47134586e-03 -3.18454188e-02
  4.28475609e-02  2.54965105e-02  1.99499820e-02 -4.39237047e-02
  1.42808827e-01 -1.65522647e-02 -6.86000609e-02  1.88627696e-02
 -7.77561323e-02 -1.63778654e-01  1.36455953e-01 -2.02153038e-02
 -3.65237925e-02  6.93336872e-02  1.63436545e-01 -2.51189353e-02
  8.79100474e-02 -8.32074211e-02 -1.57047944e-01 -2.33795767e-02
 -3.22669602e-02 -4.15146917e-02 -3.53217872e-02 -1.75091014e-01
 -2.06165917e-02 -2.53614045e-02  1.17761278e-02 -3.24161219e-02
  6.58090211e-02 -3.26201883e-01  1.00485190e-01 -2.01833026e-02
 -9.36800720e-02 -7.42121053e-03  1.34033356e-01 -7.25195759e-02
  1.31097732e-01  6.46246105e-02 -5.47103409e-02 -1.83616977e-02
  5.43470887e-02  6.68879541e-02 -1.73353564e-02 -1.69693682e-01
 -2.27221241e+00  7.58902510e-02  1.21721073e-01  7.90243271e-02
 -1.45869722e-01 -3.53302913e-02  1.15759547e-01 -9.02938241e-02
  1.34468987e-01  4.18243699e-02  3.87400684e-02 -7.93054586e-02
  2.90408789e-02  1.39422850e-02 -8.70765865e-02  9.81931594e-03
  5.71173505e-02  2.43403069e-01  6.71171955e-03  7.36034887e-02
 -2.69155071e-01  8.82375218e-02  1.51929205e-02 -1.18782021e-02
 -5.64291654e-03  7.69523429e-02  7.32745520e-02 -1.19408362e-01
  8.93163083e-02  2.77861075e-02  9.44324917e-02  1.74556617e-02
  9.80803233e-03  1.18000690e-01  6.38151045e-02  5.70151955e-02
  2.57031427e-02 -7.91064917e-02  1.70972414e-02 -1.54041190e-01
  1.70538540e-01 -5.61793926e-02  1.13124511e-01  4.15900053e-01
  1.01720880e-01 -3.22175534e-02  1.19148992e-01 -6.47858947e-03
 -8.04039004e-02 -6.20424519e-02  6.24110248e-02  1.20545947e-02
  1.75370855e-01  7.73802932e-03  2.30991045e-02 -6.66234977e-02
  9.00418671e-02 -4.06556639e-02  5.68705308e-02  8.39625865e-02
  1.24967456e-02 -5.08025128e-03 -1.11576112e-01 -2.96329286e-02
  1.05041778e-01 -8.99678496e-03 -1.37711827e-02 -7.68846767e-02
 -2.96298098e-02  5.87822301e-02 -5.25728703e-02  6.21977511e-02
  8.01396764e-02 -6.58420624e-02  3.30709150e-02 -3.75786842e-02
 -8.74165451e-02 -1.55512200e-01  4.08122061e-02 -1.83966534e-02
 -2.49325195e-03  5.06156782e-02 -1.64692657e-01 -9.16160526e-03
  1.25099703e-02  2.65055374e-01 -8.47150000e-02  1.24669068e-02
 -1.05596082e-01 -8.50307624e-02 -6.02083118e-02 -1.13641387e-01
  2.28692865e-02  8.01166738e-02  2.93047305e-02 -8.71976226e-02
 -4.59481338e-02  3.05358098e-02  2.86743436e-02 -1.28260344e-01
 -3.51502712e-02  3.25280947e-02  7.85560256e-02 -5.66869968e-02
  1.43710662e-01 -1.88660722e-02 -4.90891291e-02  1.36544524e-01
 -1.35358993e-01  3.97899759e-02  3.15583371e-02 -1.24417797e-01
  1.96430301e-01  9.03605000e-02  6.45683331e-02  3.45713654e-02
 -7.50400451e-03 -1.92248868e-01 -5.56344880e-02 -4.30850564e-02
 -4.72264466e-02 -2.41189336e-02 -2.82493090e-02  3.86498647e-03
 -1.90509685e-01 -6.96707098e-02 -8.09331541e-02 -1.57626657e-01
 -2.24069241e+00  5.85666143e-02 -1.04197464e-01 -5.02483534e-02
 -9.23089323e-02 -1.39796477e-01 -5.73516391e-02 -1.61610150e-02
 -1.73757203e-02 -9.36521955e-02 -3.28051910e-02  2.46312147e-02
  1.12106217e-01 -7.23807391e-02  3.28335541e-02  6.57282744e-02
 -9.07810564e-02  5.66372902e-02 -2.78414889e-01  6.32203436e-02
 -8.08557910e-02 -1.16748647e-02  1.14178564e-02 -1.19497797e-01
 -6.15979139e-02 -1.32468684e-02 -8.70528496e-03  1.10033832e-01
  5.02375083e-02 -2.67974436e-02  8.32264459e-02 -2.40949278e-02
  1.25718120e-01  2.46004511e-03  1.50078600e-01 -4.19139383e-02
  1.46170977e-03 -4.84066953e-02 -2.54503647e-02  2.25536120e-02
  2.53345827e-02 -1.32052275e-01 -8.14239519e-02  3.81269186e-02
  1.13959892e-01  1.56231306e-01 -6.64802346e-02  4.08063979e-02
 -1.58069503e-01  9.61999474e-02 -5.37026820e-02  1.48465812e-02
  1.37041343e-01  3.34976562e-02 -5.22043541e-02  4.35462985e-02
  8.45444541e-02 -6.50991241e-02  3.31731790e-02  1.01290265e-01
  9.12833609e-03  1.48406612e-01 -4.76044511e-02  7.94752857e-02
 -1.84728677e-02 -3.97427842e-02  5.82716030e-02 -3.67535669e-02
 -6.16068226e-02 -9.09007707e-02 -3.43839850e-04  4.79537008e-02
 -7.39385602e-02  1.69907818e-01  9.40105211e-02 -4.58781353e-03]
In [54]:
avg_w2v_vectors_cv = []; # the avg-w2v for each sentence/review is stored in this list
for sentence in tqdm(X_cv['essay'].values): # for each review/sentence
    vector = np.zeros(300) # as word vectors are of zero length
    cnt_words =0; # num of words with a valid vector in the sentence/review
    for word in sentence.split(): # for each word in a review/sentence
        if word in glove_words:
            vector += model[word]
            cnt_words += 1
    if cnt_words != 0:
        vector /= cnt_words
    avg_w2v_vectors_cv.append(vector)
100%|██████████████████████████████████████████████████████████████████████████| 24155/24155 [00:05<00:00, 4163.60it/s]
In [55]:
avg_w2v_vectors_test = []; # the avg-w2v for each sentence/review is stored in this list
for sentence in tqdm(X_test['essay'].values): # for each review/sentence
    vector = np.zeros(300) # as word vectors are of zero length
    cnt_words =0; # num of words with a valid vector in the sentence/review
    for word in sentence.split(): # for each word in a review/sentence
        if word in glove_words:
            vector += model[word]
            cnt_words += 1
    if cnt_words != 0:
        vector /= cnt_words
    avg_w2v_vectors_test.append(vector)
100%|██████████████████████████████████████████████████████████████████████████| 36052/36052 [00:08<00:00, 4100.37it/s]
In [56]:
# Similarly you can vectorize for title also
# average Word2Vec
# compute average word2vec for each review.
avg_w2v_vectors_titles_train = []; # the avg-w2v for each sentence/review is stored in this list
# for each review/sentence
for sentence in tqdm(X_train['project_title'].values): 
    vector = np.zeros(300) # as word vectors are of zero length
    cnt_words =0; # num of words with a valid vector in the sentence/review
    for word in sentence.split(): # for each word in a review/sentence
        if word in glove_words:
            vector += model[word]
            cnt_words += 1
    if cnt_words != 0:
        vector /= cnt_words
    avg_w2v_vectors_titles_train.append(vector)

print(len(avg_w2v_vectors_titles_train))
print(len(avg_w2v_vectors_titles_train[0]))
print(avg_w2v_vectors_titles_train[0])
100%|█████████████████████████████████████████████████████████████████████████| 49041/49041 [00:00<00:00, 61790.50it/s]
49041
300
[-7.94900000e-02  3.55821500e-01  2.73395000e-01 -2.19810000e-01
  2.34155000e-02  9.74340000e-02 -3.13210000e+00 -3.23780000e-01
  3.54570000e-01 -4.02775000e-02 -8.33860000e-02 -3.22450000e-01
  1.33141000e-01  3.03260000e-01 -1.99372000e-01  1.38125000e-01
  2.00645000e-01 -4.05350000e-02 -4.85750000e-02  1.21340000e-01
  4.22645000e-01 -1.57896000e-01  2.45245000e-01  1.11850000e-02
 -1.67840000e-01 -1.19150000e-02  1.07178500e-01  1.42255000e-01
 -1.35620000e-01  2.00610000e-01  4.61950000e-02  1.17500000e-03
 -9.05000000e-03  2.40887500e-01  1.45139000e-01 -1.26282000e-01
  8.68000000e-02  1.62520145e-01 -2.06910000e-01 -7.70700000e-02
 -2.27250000e-01  5.30280000e-01 -1.17250000e-01  9.27550000e-03
  3.55415000e-01  9.40750000e-03  2.07550500e-01  2.78250000e-02
  2.30955000e-01  2.06685500e-01  2.41530500e-01 -7.85400000e-02
  7.83295000e-02  2.49295000e-01  1.82652000e-01  1.47547500e-01
  8.93000000e-02 -8.80905500e-02  2.65120000e-01 -1.76645000e-01
  3.64379500e-01 -2.13155000e-02  6.00265000e-01 -7.76970000e-02
  3.24115500e-01 -5.77250000e-02  3.00240000e-02  3.03069500e-01
 -1.17989000e-01  5.51520000e-02 -3.09975000e-01  1.17553500e-01
 -1.70398500e-01 -1.12477000e-01 -4.79050000e-02 -1.62195360e-01
  1.44071000e-01  2.84975000e-02  2.06797500e-02  1.35930000e-01
  7.01845500e-02 -4.20890000e-01  3.49755000e-01 -1.91750000e-02
  8.67550000e-02 -1.73900000e-02  1.89439000e-01  3.11805000e-02
  2.53930000e-01  1.27163500e-01 -3.24673500e-01  1.40350000e-02
  3.55270000e-01 -9.74325000e-02 -1.09380000e-01 -1.33858500e-01
 -1.84230000e+00 -3.31150000e-02  1.71640000e-01 -7.57450000e-02
 -2.32260000e-01 -3.50415000e-01  2.16630000e-02  2.01155000e-01
  2.75303000e-02 -3.92920000e-01  1.74455000e-01  2.73985000e-01
  4.86585000e-02 -3.76230000e-01 -2.62715000e-01  1.51570000e-01
 -1.86086000e-01  2.07173500e-01 -7.05335000e-02 -1.71265000e-01
 -4.37085000e-02 -1.40350000e-02 -2.17141000e-01  6.56670000e-02
 -5.03717500e-01  2.78780000e-01 -2.54485000e-01 -8.15350000e-02
  2.85956500e-01 -1.98295000e-01 -5.80290000e-02  2.74015000e-01
  2.17782500e-01  1.41235000e-01 -3.12080000e-02  1.83047500e-01
  2.49485000e-01 -2.47370500e-01 -1.89768500e-01 -4.05735000e-01
  1.30300000e-01  2.85000000e-02  3.05480000e-02  5.30780000e-01
  2.15720000e-01 -3.18850000e-01  4.20275000e-01 -8.55300000e-02
 -5.41860000e-02 -2.93460000e-01  2.64270000e-01  6.82370000e-02
 -3.67200000e-02  2.68175000e-01 -1.28005000e-01  1.53717000e-01
  3.25049500e-01  1.36130000e-01  3.19430000e-01  4.61645000e-01
  1.94135000e-01  1.04006500e-01 -7.37095000e-02 -2.96970000e-01
  4.69805000e-01 -1.29275000e-01  3.07757050e-01  1.79894500e-01
 -2.11370000e-01 -3.95800000e-02  8.05320000e-02  3.11150000e-01
  2.86950500e-01 -2.95230000e-01  1.75002550e-01 -2.08620000e-01
 -2.37395000e-01 -5.09900000e-01 -1.60958500e-01 -2.16050000e-02
  1.39295000e-02 -8.23250000e-02 -3.07415000e-01  4.55215000e-02
 -1.66892000e-01  3.14171000e-01 -3.54351000e-01 -3.79500000e-02
 -3.83605000e-01 -8.86125000e-02 -1.45715000e-01 -3.16336500e-01
  3.83090000e-02  4.03335000e-01  2.86570000e-01 -5.57386000e-02
 -1.20500000e-03  1.21307500e-01  2.43030000e-01 -3.72600000e-01
 -5.43050000e-01 -2.44990000e-01 -3.54550000e-02  1.35305000e-01
 -1.83995000e-01  3.32740000e-02 -1.97650000e-01  2.05395000e-01
 -3.03532000e-01  1.98960000e-01 -1.07575000e-01 -1.15635000e-01
  5.89115000e-01  9.82195000e-02  3.57970000e-01  5.64300000e-02
 -1.19120000e-01 -4.14150000e-02  8.89050000e-02 -2.06217500e-01
 -6.51585000e-02  1.96540000e-01  1.41270000e-01 -4.22350000e-02
 -4.27340000e-01 -1.70030000e-01  5.01540000e-02 -4.60900000e-02
 -2.70855000e+00  2.39098500e-01  1.26400000e-01  3.90030000e-01
 -3.41790000e-01 -7.14250000e-01 -2.97950000e-01 -1.55121000e-01
  3.38195000e-01 -3.47430000e-01 -5.41850000e-02 -4.32591500e-02
  1.59410000e-01  2.34740000e-01  6.91430000e-02  8.43400000e-02
 -3.46560000e-01 -2.16630000e-01 -4.94760000e-01  6.04856000e-01
 -3.81945000e-01 -4.49831500e-02  1.01410000e-01  1.05880000e-01
 -2.65000000e-04  3.91255000e-01 -1.94457000e-01  1.71120000e-01
  1.59707500e-01 -2.68320000e-02  2.39900000e-01 -1.37933000e-01
  3.57900000e-01 -1.56410000e-02 -2.63558000e-01  3.05645100e-01
 -2.51515500e-01  2.45220000e-02  1.70845000e-01  7.63085000e-02
  1.32215000e-01 -3.91837000e-01 -1.06920000e-01  4.81665000e-01
  4.30235000e-01  1.94080000e-01 -1.10558300e-01  1.74345500e-01
  2.79895000e-01  6.94400000e-02 -2.02065000e-01 -6.12390000e-02
  3.28800000e-02  1.68435000e-01 -1.83115000e-01 -8.01600000e-02
 -3.94655000e-01  3.11740000e-02 -1.71675500e-01  2.75871500e-01
  2.84026000e-01  2.38700000e-02 -9.16950000e-02  1.21430000e-01
 -2.11391000e-01 -2.19445000e-01  2.48045000e-01  5.35100000e-02
  3.40935000e-01 -1.15300000e-02  4.25000000e-03  4.77420000e-01
 -1.29092500e-01  2.34025000e-01 -2.36786000e-01 -3.12835000e-01]
In [57]:
avg_w2v_vectors_titles_cv = []; # the avg-w2v for each sentence/review is stored in this list
for sentence in tqdm(X_cv['project_title'].values): # for each review/sentence
    vector = np.zeros(300) # as word vectors are of zero length
    cnt_words =0; # num of words with a valid vector in the sentence/review
    for word in sentence.split(): # for each word in a review/sentence
        if word in glove_words:
            vector += model[word]
            cnt_words += 1
    if cnt_words != 0:
        vector /= cnt_words
    avg_w2v_vectors_titles_cv.append(vector)
100%|█████████████████████████████████████████████████████████████████████████| 24155/24155 [00:00<00:00, 47420.61it/s]
In [58]:
avg_w2v_vectors_titles_test = []; # the avg-w2v for] each sentence/review is stored in this list
for sentence in tqdm(X_test['project_title'].values): # for each review/sentence
    vector = np.zeros(300) # as word vectors are of zero length
    cnt_words =0; # num of words with a valid vector in the sentence/review
    for word in sentence.split(): # for each word in a review/sentence
        if word in glove_words:
            vector += model[word]
            cnt_words += 1
    if cnt_words != 0:
        vector /= cnt_words
    avg_w2v_vectors_titles_test.append(vector)
100%|█████████████████████████████████████████████████████████████████████████| 36052/36052 [00:00<00:00, 74046.87it/s]

2.3.4 Using Pretrained Models: TFIDF weighted W2V

In [59]:
# S = ["abc def pqr", "def def def abc", "pqr pqr def"]
tfidf_model = TfidfVectorizer()
tfidf_model.fit(X_train['essay'].values)
# we are converting a dictionary with word as a key, and the idf as a value
dictionary = dict(zip(tfidf_model.get_feature_names(), list(tfidf_model.idf_)))
tfidf_words = set(tfidf_model.get_feature_names())
In [60]:
# average Word2Vec
# compute average word2vec for each review.
tfidf_w2v_vectors = []; # the avg-w2v for each sentence/review is stored in this list
for sentence in tqdm(X_train['essay'].values): # for each review/sentence
    vector = np.zeros(300) # as word vectors are of zero length
    tf_idf_weight =0; # num of words with a valid vector in the sentence/review
    for word in sentence.split(): # for each word in a review/sentence
        if (word in glove_words) and (word in tfidf_words):
            vec = model[word] # getting the vector for each word
            # here we are multiplying idf value(dictionary[word]) and the tf value((sentence.count(word)/len(sentence.split())))
            tf_idf = dictionary[word]*(sentence.count(word)/len(sentence.split())) # getting the tfidf value for each word
            vector += (vec * tf_idf) # calculating tfidf weighted w2v
            tf_idf_weight += tf_idf
    if tf_idf_weight != 0:
        vector /= tf_idf_weight
    tfidf_w2v_vectors.append(vector)

print(len(tfidf_w2v_vectors))
print(len(tfidf_w2v_vectors[0]))
100%|███████████████████████████████████████████████████████████████████████████| 49041/49041 [01:22<00:00, 593.82it/s]
49041
300
In [61]:
tfidf_w2v_vectors_cv = []; # the avg-w2v for each sentence/review is stored in this list
for sentence in tqdm(X_cv['essay'].values): # for each review/sentence
    vector = np.zeros(300) # as word vectors are of zero length
    tf_idf_weight =0; # num of words with a valid vector in the sentence/review
    for word in sentence.split(): # for each word in a review/sentence
        if (word in glove_words) and (word in tfidf_words):
            vec = model[word] # getting the vector for each word
            # here we are multiplying idf value(dictionary[word]) and the tf value((sentence.count(word)/len(sentence.split())))
            tf_idf = dictionary[word]*(sentence.count(word)/len(sentence.split())) # getting the tfidf value for each word
            vector += (vec * tf_idf) # calculating tfidf weighted w2v
            tf_idf_weight += tf_idf
    if tf_idf_weight != 0:
        vector /= tf_idf_weight
    tfidf_w2v_vectors_cv.append(vector)
100%|███████████████████████████████████████████████████████████████████████████| 24155/24155 [00:39<00:00, 605.64it/s]
In [62]:
tfidf_w2v_vectors_test = []; # the avg-w2v for each sentence/review is stored in this list
for sentence in tqdm(X_test['essay'].values): # for each review/sentence
    vector = np.zeros(300) # as word vectors are of zero length
    tf_idf_weight =0; # num of words with a valid vector in the sentence/review
    for word in sentence.split(): # for each word in a review/sentence
        if (word in glove_words) and (word in tfidf_words):
            vec = model[word] # getting the vector for each word
            # here we are multiplying idf value(dictionary[word]) and the tf value((sentence.count(word)/len(sentence.split())))
            tf_idf = dictionary[word]*(sentence.count(word)/len(sentence.split())) # getting the tfidf value for each word
            vector += (vec * tf_idf) # calculating tfidf weighted w2v
            tf_idf_weight += tf_idf
    if tf_idf_weight != 0:
        vector /= tf_idf_weight
    tfidf_w2v_vectors_test.append(vector)
100%|███████████████████████████████████████████████████████████████████████████| 36052/36052 [00:59<00:00, 608.40it/s]
In [63]:
# S = ["abc def pqr", "def def def abc", "pqr pqr def"]
tfidf_model_titles = TfidfVectorizer()
tfidf_model_titles.fit(X_train['project_title'].values)
# we are converting a dictionary with word as a key, and the idf as a value
dictionary = dict(zip(tfidf_model_titles.get_feature_names(), list(tfidf_model_titles.idf_)))
tfidf_words_titles = set(tfidf_model_titles.get_feature_names())
In [64]:
# average Word2Vec
# compute average word2vec for each review.
tfidf_w2v_vectors_titles = []; # the avg-w2v for each sentence/review is stored in this list
for sentence in tqdm(X_train['project_title'].values): # for each review/sentence
    vector = np.zeros(300) # as word vectors are of zero length
    tf_idf_weight =0; # num of words with a valid vector in the sentence/review
    for word in sentence.split(): # for each word in a review/sentence
        if (word in glove_words) and (word in tfidf_words_titles):
            vec = model[word] # getting the vector for each word
            # here we are multiplying idf value(dictionary[word]) and the tf value((sentence.count(word)/len(sentence.split())))
            tf_idf = dictionary[word]*(sentence.count(word)/len(sentence.split())) # getting the tfidf value for each word
            vector += (vec * tf_idf) # calculating tfidf weighted w2v
            tf_idf_weight += tf_idf
    if tf_idf_weight != 0:
        vector /= tf_idf_weight
    tfidf_w2v_vectors_titles.append(vector)

print(len(tfidf_w2v_vectors_titles))
print(len(tfidf_w2v_vectors_titles[0]))
100%|█████████████████████████████████████████████████████████████████████████| 49041/49041 [00:01<00:00, 36161.82it/s]
49041
300
In [65]:
# average Word2Vec
# compute average word2vec for each review.
tfidf_w2v_vectors_titles_cv = []; # the avg-w2v for each sentence/review is stored in this list
for sentence in tqdm(X_cv['project_title'].values): # for each review/sentence
    vector = np.zeros(300) # as word vectors are of zero length
    tf_idf_weight =0; # num of words with a valid vector in the sentence/review
    for word in sentence.split(): # for each word in a review/sentence
        if (word in glove_words) and (word in tfidf_words_titles):
            vec = model[word] # getting the vector for each word
            # here we are multiplying idf value(dictionary[word]) and the tf value((sentence.count(word)/len(sentence.split())))
            tf_idf = dictionary[word]*(sentence.count(word)/len(sentence.split())) # getting the tfidf value for each word
            vector += (vec * tf_idf) # calculating tfidf weighted w2v
            tf_idf_weight += tf_idf
    if tf_idf_weight != 0:
        vector /= tf_idf_weight
    tfidf_w2v_vectors_titles_cv.append(vector)

print(len(tfidf_w2v_vectors_titles_cv))
print(len(tfidf_w2v_vectors_titles_cv[0]))
100%|█████████████████████████████████████████████████████████████████████████| 24155/24155 [00:00<00:00, 36365.24it/s]
24155
300
In [66]:
# average Word2Vec
# compute average word2vec for each review.
tfidf_w2v_vectors_titles_test = []; # the avg-w2v for each sentence/review is stored in this list
for sentence in tqdm(X_test['project_title'].values): # for each review/sentence
    vector = np.zeros(300) # as word vectors are of zero length
    tf_idf_weight =0; # num of words with a valid vector in the sentence/review
    for word in sentence.split(): # for each word in a review/sentence
        if (word in glove_words) and (word in tfidf_words_titles):
            vec = model[word] # getting the vector for each word
            # here we are multiplying idf value(dictionary[word]) and the tf value((sentence.count(word)/len(sentence.split())))
            tf_idf = dictionary[word]*(sentence.count(word)/len(sentence.split())) # getting the tfidf value for each word
            vector += (vec * tf_idf) # calculating tfidf weighted w2v
            tf_idf_weight += tf_idf
    if tf_idf_weight != 0:
        vector /= tf_idf_weight
    tfidf_w2v_vectors_titles_test.append(vector)

print(len(tfidf_w2v_vectors_titles_test))
print(len(tfidf_w2v_vectors_titles_test[0]))
100%|█████████████████████████████████████████████████████████████████████████| 36052/36052 [00:00<00:00, 36944.78it/s]
36052
300

2.4 Appling Logistic Regression on different kind of featurization as mentioned in the instructions


Apply Logistic Regression on different kind of featurization as mentioned in the instructions
For Every model that you work on make sure you do the step 2 and step 3 of instrucations

In [67]:
# please write all the code with proper documentation, and proper titles for each subsection
# go through documentations and blogs before you start coding
# first figure out what to do, and then think about how to do.
# reading and understanding error messages will be very much helpfull in debugging your code

# when you plot any graph make sure you use 
    # a. Title, that describes your plot, this will be very helpful to the reader
    # b. Legends if needed
    # c. X-axis label
    # d. Y-axis label
    
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score
from sklearn.cross_validation import cross_val_score
from collections import Counter
from sklearn.metrics import accuracy_score
from sklearn import cross_validation    

import matplotlib.pyplot as plt
from sklearn.metrics import roc_auc_score

# https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html#sklearn.metrics.roc_curve
from sklearn.metrics import roc_curve, auc
    
from scipy.sparse import hstack   
import time
from sklearn.metrics import confusion_matrix
C:\Users\francisco.porrata\AppData\Local\Continuum\anaconda3\lib\site-packages\sklearn\cross_validation.py:41: DeprecationWarning:

This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.

In [68]:
def batch_predict(clf, data):
    # roc_auc_score(y_true, y_score) the 2nd parameter should be probability estimates of the positive class
    # not the predicted outputs

    y_data_pred = []
    tr_loop = data.shape[0] - data.shape[0]%1000
    # consider you X_tr shape is 49041, then your tr_loop will be 49041 - 49041%1000 = 49000
    # in this for loop we will iterate unti the last 1000 multiplier
    for i in range(0, tr_loop, 1000):
        y_data_pred.extend(clf.predict_proba(data[i:i+1000])[:,1])
    # we will be predicting for the last data points
    if data.shape[0]%1000 !=0:
        y_data_pred.extend(clf.predict_proba(data[tr_loop:])[:,1])
    
    return y_data_pred
In [69]:
def model_performance(X_tr, y_train,X_cr,y_cv):

    
    train_auc = []
    cv_auc = []
    alpha = [10**-4, 10**-3, 10**-2, 10**-1, 10**0, 10**1, 10**2, 10**3, 10**4]
    for i in tqdm(alpha):
        SGD = SGDClassifier(loss='log', alpha = i, class_weight = 'balanced')
        SGD.fit(X_tr, y_train)

        y_train_pred = SGD.predict_proba(X_tr)[:,1]      
        y_cv_pred = SGD.predict_proba(X_cr)[:,1] 

        # roc_auc_score(y_true, y_score) the 2nd parameter should be probability estimates of the positive class
        # not the predicted outputs        
        train_auc.append(roc_auc_score(y_train,y_train_pred))
        cv_auc.append(roc_auc_score(y_cv, y_cv_pred))

    plt.semilogx(alpha, train_auc, label='Train AUC')
    plt.semilogx(alpha, cv_auc, label='CV AUC')

    plt.scatter(alpha, train_auc, label='Train AUC points')
    plt.scatter(alpha, cv_auc, label='CV AUC points')

    plt.legend()
    plt.xlabel("alpha: hyperparameter")
    plt.ylabel("AUC")
    plt.title("ERROR PLOTS")
    plt.grid()
    plt.show()
In [70]:
def best_parameter_ROC(X_tr, y_train,  X_te, y_test, best_alpha):
    SGD = SGDClassifier(loss='log', alpha = best_alpha, class_weight = 'balanced')
    SGD.fit(X_tr, y_train)
    # roc_auc_score(y_true, y_scor, e) the 2nd parameter should be probability estimates of the positive class
    # not the predicted outputs

    y_train_pred = SGD.predict_proba(X_tr)[:,1]    
    y_test_pred = SGD.predict_proba(X_te)[:,1]  

    train_fpr, train_tpr, tr_thresholds = roc_curve(y_train, y_train_pred)
    test_fpr, test_tpr, te_thresholds = roc_curve(y_test, y_test_pred)

    plt.plot(train_fpr, train_tpr, label="train AUC ="+str(auc(train_fpr, train_tpr)))
    plt.plot(test_fpr, test_tpr, label="test AUC ="+str(auc(test_fpr, test_tpr)))
    plt.legend()
    plt.xlabel("False Positive Rate (fpr)")
    plt.ylabel("True Positive Rate (tpr)")
    plt.title("ROC")
    plt.grid()
    plt.show()
    return (train_fpr, train_tpr, tr_thresholds, y_train_pred, y_test_pred)
In [71]:
# we are writing our own function for predict, with defined thresould
# we will pick a threshold that will give the least fpr
def find_best_threshold(threshould, fpr, tpr):
    t = threshould[np.argmax(tpr*(1-fpr))]
    # (tpr*(1-fpr)) will be maximum if your fpr is very low and tpr is very high
    print("the maximum value of tpr*(1-fpr)", max(tpr*(1-fpr)), "for threshold", np.round(t,3))
    return t

def predict_with_best_t(proba, threshould):
    predictions = []
    for i in proba:
        if i>=threshould:
            predictions.append(1)
        else:
            predictions.append(0)
    return predictions
In [72]:
import seaborn as sns

def print_confusion_matrix(data, title, class_names, figsize = (10,7)):
    df_cm = pd.DataFrame(data, columns=class_names, index = class_names)
    df_cm.index.name = 'Actual'
    df_cm.columns.name = 'Predicted'
    plt.rcParams.update({'font.size': 16})
    plt.title(title)
    sns.set(font_scale=1.4)#for label size
    sns.heatmap(df_cm, cmap="Blues", annot=True, annot_kws={"size": 16}, fmt="d")# font size
    

2.4.1 Applying Logistic Regression on BOW, SET 1

In [73]:
# merge two sparse matrices: https://stackoverflow.com/a/19710648/4084039

X_tr = hstack((X_train_neg_norm, X_train_neu_norm, X_train_pos_norm, X_train_compound_norm, X_train_words_in_essay_norm, \
               X_train_words_in_title_norm,X_train_essay_bow, X_train_title_bow, X_train_state_ohe, X_train_teacher_ohe, \
               X_train_grade_ohe, X_train_clean_cat_ohe , X_train_clean_sub_ohe , X_train_price_norm, \
               X_train_previously_posted_projects_norm )).tocsr()

X_cr = hstack((X_cv_neg_norm, X_cv_neu_norm, X_cv_pos_norm, X_cv_compound_norm, X_cv_words_in_essay_norm, \
               X_cv_words_in_title_norm,X_cv_essay_bow, X_cv_title_bow, X_cv_state_ohe, X_cv_teacher_ohe, \
               X_cv_grade_ohe, X_cv_clean_cat_ohe,   X_cv_clean_sub_ohe , \
               X_cv_price_norm,X_cv_previously_posted_projects_norm )).tocsr()

X_te = hstack((X_test_neg_norm, X_test_neu_norm, X_test_pos_norm, X_test_compound_norm, X_test_words_in_essay_norm, \
               X_test_words_in_title_norm,X_test_essay_bow, X_test_title_bow, X_test_state_ohe, X_test_teacher_ohe, \
               X_test_grade_ohe, X_test_clean_cat_ohe, X_test_clean_sub_ohe , X_test_price_norm, \
               X_test_previously_posted_projects_norm )).tocsr()


print("Final Data matrix")
print(X_tr.shape, y_train.shape)
print(X_cr.shape, y_cv.shape)
print(X_te.shape, y_test.shape)
print("="*100)

#reset the default parameters for matplotlib
mpl.rcParams.update(inline_rc)

model_performance(X_tr, y_train,X_cr,y_cv)
Final Data matrix
(49041, 16754) (49041,)
(24155, 16754) (24155,)
(36052, 16754) (36052,)
====================================================================================================
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:01<00:00,  4.85it/s]
In [74]:
#best k using loop
best_alpha_bow_loop = .01
train_fpr, train_tpr, tr_thresholds, y_train_pred, y_test_pred = best_parameter_ROC(X_tr, y_train,  X_te, y_test, best_alpha_bow_loop)
In [75]:
print("="*100)
best_t = find_best_threshold(tr_thresholds, train_fpr, train_tpr)

data = confusion_matrix(y_train, predict_with_best_t(y_train_pred, best_t))
print_confusion_matrix(data, "Train confusion matrix", [0,1])
====================================================================================================
the maximum value of tpr*(1-fpr) 0.5152908755793354 for threshold 0.524
In [76]:
data = confusion_matrix(y_test, predict_with_best_t(y_test_pred, best_t))
print_confusion_matrix(data, "Test confusion matrix", [0,1])

2.4.2 Applying Logistic Regression on TFIDF, SET 2

In [77]:
X_tr = hstack((X_train_neg_norm, X_train_neu_norm, X_train_pos_norm, X_train_compound_norm, X_train_words_in_essay_norm, X_train_words_in_title_norm, \
               X_train_essay_Tfidf, X_train_title_Tfidf, X_train_state_ohe, X_train_teacher_ohe, X_train_grade_ohe, \
               X_train_clean_cat_ohe , X_train_clean_sub_ohe , X_train_price_norm,X_train_previously_posted_projects_norm )).tocsr()

X_cr = hstack((X_cv_neg_norm, X_cv_neu_norm, X_cv_pos_norm, X_cv_compound_norm, X_cv_words_in_essay_norm, \
               X_cv_words_in_title_norm,X_cv_essay_Tfidf, X_cv_title_Tfidf, X_cv_state_ohe, X_cv_teacher_ohe, X_cv_grade_ohe, \
               X_cv_clean_cat_ohe , X_cv_clean_sub_ohe , X_cv_price_norm,X_cv_previously_posted_projects_norm )).tocsr()

X_te = hstack((X_test_neg_norm, X_test_neu_norm, X_test_pos_norm, X_test_compound_norm, X_test_words_in_essay_norm, \
               X_test_words_in_title_norm,X_test_essay_Tfidf, X_test_title_Tfidf, X_test_state_ohe, X_test_teacher_ohe, \
               X_test_grade_ohe, X_test_clean_cat_ohe , X_test_clean_sub_ohe , X_test_price_norm, \
               X_test_previously_posted_projects_norm )).tocsr()

print("Final Data matrix")
print(X_tr.shape, y_train.shape)
print(X_cr.shape, y_cv.shape)
print(X_te.shape, y_test.shape)
print("="*100)

#reset the default parameters for matplotlib
mpl.rcParams.update(inline_rc)

model_performance(X_tr, y_train,X_cr,y_cv)
Final Data matrix
(49041, 16754) (49041,)
(24155, 16754) (24155,)
(36052, 16754) (36052,)
====================================================================================================
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:01<00:00,  4.87it/s]
In [78]:
best_alpha_tfidf_loop = .0001
train_fpr, train_tpr, tr_thresholds, y_train_pred, y_test_pred = best_parameter_ROC(X_tr, y_train,  X_te, y_test, best_alpha_tfidf_loop)
In [79]:
print("="*100)
best_t = find_best_threshold(tr_thresholds, train_fpr, train_tpr)

data = confusion_matrix(y_train, predict_with_best_t(y_train_pred, best_t))
print_confusion_matrix(data, "Train confusion matrix", [0,1])
====================================================================================================
the maximum value of tpr*(1-fpr) 0.5373465629025561 for threshold 0.546
In [80]:
data = confusion_matrix(y_test, predict_with_best_t(y_test_pred, best_t))
print_confusion_matrix(data, "Test confusion matrix", [0,1])

2.4.3 Applying Logistic Regression on AVG W2V, SET 3

In [95]:
X_tr = hstack((X_train_neg_norm, X_train_neu_norm, X_train_pos_norm, X_train_compound_norm, X_train_words_in_essay_norm, X_train_words_in_title_norm, \
               avg_w2v_vectors_train, avg_w2v_vectors_titles_train, X_train_state_ohe, X_train_teacher_ohe, X_train_grade_ohe, \
               X_train_clean_cat_ohe , X_train_clean_sub_ohe , X_train_price_norm,X_train_previously_posted_projects_norm )).tocsr()

X_cr = hstack((X_cv_neg_norm, X_cv_neu_norm, X_cv_pos_norm, X_cv_compound_norm, X_cv_words_in_essay_norm, \
               X_cv_words_in_title_norm,avg_w2v_vectors_cv, avg_w2v_vectors_titles_cv, X_cv_state_ohe, \
               X_cv_teacher_ohe, X_cv_grade_ohe, X_cv_clean_cat_ohe , X_cv_clean_sub_ohe , X_cv_price_norm, \
               X_cv_previously_posted_projects_norm )).tocsr()

X_te = hstack((X_test_neg_norm, X_test_neu_norm, X_test_pos_norm, X_test_compound_norm, X_test_words_in_essay_norm, \
               X_test_words_in_title_norm,avg_w2v_vectors_test, avg_w2v_vectors_titles_test, X_test_state_ohe, \
               X_test_teacher_ohe, X_test_grade_ohe, X_test_clean_cat_ohe , X_test_clean_sub_ohe , \
               X_test_price_norm,X_test_previously_posted_projects_norm )).tocsr()

print("Final Data matrix")
print(X_tr.shape, y_train.shape)
print(X_cr.shape, y_cv.shape)
print(X_te.shape, y_test.shape)
print("="*100)

#reset the default parameters for matplotlib
mpl.rcParams.update(inline_rc)

model_performance(X_tr, y_train,X_cr,y_cv)
Final Data matrix
(49041, 707) (49041,)
(24155, 707) (24155,)
(36052, 707) (36052,)
====================================================================================================
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:05<00:00,  1.54it/s]
In [96]:
best_alpha_w2v_loop = .001
train_fpr, train_tpr, tr_thresholds, y_train_pred, y_test_pred = best_parameter_ROC(X_tr, y_train,  X_te, y_test, best_alpha_w2v_loop)
In [83]:
print("="*100)
best_t = find_best_threshold(tr_thresholds, train_fpr, train_tpr)

data = confusion_matrix(y_train, predict_with_best_t(y_train_pred, best_t))
print_confusion_matrix(data, "Train confusion matrix", [0,1])
====================================================================================================
the maximum value of tpr*(1-fpr) 0.38844969270109314 for threshold 0.13
In [84]:
data = confusion_matrix(y_test, predict_with_best_t(y_test_pred, best_t))
print_confusion_matrix(data, "Test confusion matrix", [0,1])

2.4.4 Applying Logistic Regression on TFIDF W2V, SET 4

In [85]:
X_tr = hstack((X_train_neg_norm, X_train_neu_norm, X_train_pos_norm, X_train_compound_norm, X_train_words_in_essay_norm, X_train_words_in_title_norm, \
               tfidf_w2v_vectors, tfidf_w2v_vectors_titles, X_train_state_ohe, X_train_teacher_ohe, X_train_grade_ohe, \
               X_train_clean_cat_ohe , X_train_clean_sub_ohe , X_train_price_norm,X_train_previously_posted_projects_norm )).tocsr()

X_cr = hstack((X_cv_neg_norm, X_cv_neu_norm, X_cv_pos_norm, X_cv_compound_norm, X_cv_words_in_essay_norm, \
               X_cv_words_in_title_norm,tfidf_w2v_vectors_cv, tfidf_w2v_vectors_titles_cv, X_cv_state_ohe, \
               X_cv_teacher_ohe, X_cv_grade_ohe, X_cv_clean_cat_ohe , X_cv_clean_sub_ohe , \
               X_cv_price_norm,X_cv_previously_posted_projects_norm )).tocsr()

X_te = hstack((X_test_neg_norm, X_test_neu_norm, X_test_pos_norm, X_test_compound_norm, X_test_words_in_essay_norm, \
               X_test_words_in_title_norm,tfidf_w2v_vectors_test, tfidf_w2v_vectors_titles_test, X_test_state_ohe, \
               X_test_teacher_ohe, X_test_grade_ohe, X_test_clean_cat_ohe , X_test_clean_sub_ohe , \
               X_test_price_norm,X_test_previously_posted_projects_norm )).tocsr()

print("Final Data matrix")
print(X_tr.shape, y_train.shape)
print(X_cr.shape, y_cv.shape)
print(X_te.shape, y_test.shape)
print("="*100)

#reset the default parameters for matplotlib
mpl.rcParams.update(inline_rc)

model_performance(X_tr, y_train,X_cr,y_cv)
Final Data matrix
(49041, 707) (49041,)
(24155, 707) (24155,)
(36052, 707) (36052,)
====================================================================================================
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:05<00:00,  1.66it/s]
In [86]:
best_alpha_tfidfw2v_loop = .001
train_fpr, train_tpr, tr_thresholds, y_train_pred, y_test_pred = best_parameter_ROC(X_tr, y_train,  X_te, y_test, best_alpha_tfidfw2v_loop)
In [87]:
print("="*100)
best_t = find_best_threshold(tr_thresholds, train_fpr, train_tpr)

data = confusion_matrix(y_train, predict_with_best_t(y_train_pred, best_t))
print_confusion_matrix(data, "Train confusion matrix", [0,1])
====================================================================================================
the maximum value of tpr*(1-fpr) 0.4296980655689866 for threshold 0.442
In [88]:
data = confusion_matrix(y_test, predict_with_best_t(y_test_pred, best_t))
print_confusion_matrix(data, "Test confusion matrix", [0,1])

2.5 Logistic Regression with added Features `Set 5`

In [97]:
#No text features.  

X_tr = hstack((X_train_neg_norm, X_train_neu_norm, X_train_pos_norm, X_train_compound_norm, X_train_words_in_essay_norm, X_train_words_in_title_norm, \
               X_train_state_ohe, X_train_teacher_ohe, X_train_grade_ohe, X_train_clean_cat_ohe , \
               X_train_clean_sub_ohe , X_train_price_norm,X_train_previously_posted_projects_norm )).tocsr()
X_cr = hstack((X_cv_neg_norm, X_cv_neu_norm, X_cv_pos_norm, X_cv_compound_norm, X_cv_words_in_essay_norm, \
               X_cv_words_in_title_norm,X_cv_state_ohe, X_cv_teacher_ohe, X_cv_grade_ohe, X_cv_clean_cat_ohe , X_cv_clean_sub_ohe , X_cv_price_norm,X_cv_previously_posted_projects_norm )).tocsr()
X_te = hstack((X_test_neg_norm, X_test_neu_norm, X_test_pos_norm, X_test_compound_norm, X_test_words_in_essay_norm, \
               X_test_words_in_title_norm,X_test_state_ohe, X_test_teacher_ohe, X_test_grade_ohe, X_test_clean_cat_ohe , X_test_clean_sub_ohe , X_test_price_norm,X_test_previously_posted_projects_norm )).tocsr()

print("Final Data matrix")
print(X_tr.shape, y_train.shape)
print(X_cr.shape, y_cv.shape)
print(X_te.shape, y_test.shape)
print("="*100)

#reset the default parameters for matplotlib
mpl.rcParams.update(inline_rc)

model_performance(X_tr, y_train,X_cr,y_cv)
Final Data matrix
(49041, 107) (49041,)
(24155, 107) (24155,)
(36052, 107) (36052,)
====================================================================================================
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:00<00:00, 14.47it/s]
In [98]:
best_alpha_no_text_loop = .01
train_fpr, train_tpr, tr_thresholds, y_train_pred, y_test_pred = best_parameter_ROC(X_tr, y_train,  X_te, y_test, best_alpha_no_text_loop)
In [91]:
print("="*100)
best_t = find_best_threshold(tr_thresholds, train_fpr, train_tpr)

data = confusion_matrix(y_train, predict_with_best_t(y_train_pred, best_t))
print_confusion_matrix(data, "Train confusion matrix", [0,1])
====================================================================================================
the maximum value of tpr*(1-fpr) 0.30790657657617715 for threshold 0.517
In [92]:
data = confusion_matrix(y_test, predict_with_best_t(y_test_pred, best_t))
print_confusion_matrix(data, "Test confusion matrix", [0,1])

3. Conclusions

In [99]:
#### http://zetcode.com/python/prettytable/
from prettytable import PrettyTable

# Using Loop to determine best Hyperparameters

x = PrettyTable()
x.field_names = ["Vectorizer", "Model", "Hyperparameter", "AUC"]
x.add_row(["BOW", "LR",best_alpha_bow_loop,0.7134])
x.add_row(["TFIDF", "LR",best_alpha_tfidf_loop,0.6945])
x.add_row(["W2V", "LR",best_alpha_w2v_loop,0.6752])
x.add_row(["TFIDFW2V", "LR",best_alpha_tfidfw2v_loop,0.6878])

print(x)
+------------+-------+----------------+--------+
| Vectorizer | Model | Hyperparameter |  AUC   |
+------------+-------+----------------+--------+
|    BOW     |   LR  |      0.01      | 0.7134 |
|   TFIDF    |   LR  |     0.0001     | 0.6945 |
|    W2V     |   LR  |     0.001      | 0.6752 |
|  TFIDFW2V  |   LR  |     0.001      | 0.6878 |
+------------+-------+----------------+--------+
In [100]:
# No text features
x = PrettyTable()
x.field_names = ["Vectorizer", "Model", "Hyperparameter", "AUC"]
x.add_row(["No Text Vectorizer", "LR",best_alpha_no_text_loop, 0.5626])

print(x)
+--------------------+-------+----------------+--------+
|     Vectorizer     | Model | Hyperparameter |  AUC   |
+--------------------+-------+----------------+--------+
| No Text Vectorizer |   LR  |      0.01      | 0.5626 |
+--------------------+-------+----------------+--------+

Observations
1) BOW gave the best Test AUC
2) Adding the text vectors definitively improved the performance for all types of vectorizers